Publications

  • Majid, A., & Levinson, S. C. (2007). The language of vision I: colour. In A. Majid (Ed.), Field Manual Volume 10 (pp. 22-25). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492901.
  • Majid, A. (2010). Words for parts of the body. In B. C. Malt, & P. Wolff (Eds.), Words and the Mind: How words capture human experience (pp. 58-71). New York: Oxford University Press.
  • Malaisé, V., Gazendam, L., & Brugman, H. (2007). Disambiguating automatic semantic annotation based on a thesaurus structure. In Proceedings of TALN 2007.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. In A. K. Goel, C. M. Seifert, & C. C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2275-2281). Montreal, QC: Cognitive Science Society.

    Abstract

    Studies have claimed that blind people’s spatial representations differ from those of sighted people, and that blind people display superior auditory processing. Due to the nature of auditory and haptic information, it has been proposed that blind people’s spatial representations are more sequential than those of sighted people. Even the temporary loss of sight—such as through blindfolding—can affect spatial representations, but little research has been done on this topic. We compared blindfolded and sighted people’s linguistic spatial expressions and non-linguistic localization accuracy to test how blindfolding affects the representation of path in auditory motion events. We found that blindfolded people were as good as sighted people at localizing simple sounds, but they outperformed sighted people at localizing auditory motion events. Blindfolded people’s path-related speech also included more sequential and fewer holistic elements. Our results indicate that even temporary loss of sight influences spatial representations of auditory motion events.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Mani, N., Mishra, R. K., & Huettig, F. (2018). Introduction to 'The Interactive Mind: Language, Vision and Attention'. In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 1-2). Chennai: Macmillan Publishers India.
  • Marcoux, K., & Ernestus, M. (2019). Differences between native and non-native Lombard speech in terms of pitch range. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the ICA 2019 and EAA Euroregio. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019 (pp. 5713-5720). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Lombard speech, speech produced in noise, is acoustically different from speech produced in quiet (plain speech) in several ways, including having a higher and wider F0 range (pitch). Extensive research on native Lombard speech does not consider that non-natives experience a higher cognitive load while producing speech, and that the native language may influence the non-native speech. We investigated pitch range in plain and Lombard speech in native and non-native speakers. Dutch and American-English speakers read contrastive question-answer pairs in quiet and in noise in English, while the Dutch also read Dutch sentence pairs. We found that Lombard speech is characterized by a wider pitch range than plain speech for all speakers (native English, non-native English, and native Dutch). This shows that non-natives also widen their pitch range in Lombard speech. In sentences with early focus, we see the same increase in pitch range when going from plain to Lombard speech in native and non-native English, but a smaller increase in native Dutch. In sentences with late focus, we see the biggest increase for the native English speakers, followed by the non-native English speakers and then the native Dutch. Together these results indicate an effect of the native language on non-native Lombard speech.
  • Marcoux, K., & Ernestus, M. (2019). Pitch in native and non-native Lombard speech. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2605-2609). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Lombard speech, speech produced in noise, is typically produced with a higher fundamental frequency (F0, pitch) compared to speech in quiet. This paper examined the potential differences in native and non-native Lombard speech by analyzing median pitch in sentences with early or late focus produced in quiet and noise. We found an increase in pitch in late-focus sentences in noise for Dutch speakers in both English and Dutch, and for American-English speakers in English. These results show that non-native speakers produce Lombard speech, despite their higher cognitive load. For the early-focus sentences, we found a difference between the Dutch and the American-English speakers. Whereas the Dutch showed an increased F0 in noise in English and Dutch, the American-English speakers did not in English. Together, these results suggest that some acoustic characteristics of Lombard speech, such as pitch, may be language-specific, potentially resulting in the native language influencing the non-native Lombard speech.
  • Massaro, D. W., & Jesse, A. (2007). Audiovisual speech perception and word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 19-35). Oxford: Oxford University Press.

    Abstract

    In most of our everyday conversations, we not only hear but also see each other talk. Our understanding of speech benefits from having the speaker's face present. This finding immediately raises the question of how the information from the different perceptual sources is used to reach the best overall decision. This need for processing of multiple sources of information also exists in auditory speech perception, however. Audiovisual speech simply shifts the focus from intramodal to intermodal sources but does not necessitate a qualitatively different form of processing. It is essential that a model of speech perception operationalizes the concept of processing multiple sources of information so that quantitative predictions can be made. This chapter gives an overview of the main research questions and findings unique to audiovisual speech perception and word recognition research as well as what general questions about speech perception and cognition the research in this field can answer. The main theoretical approaches to explain integration and audiovisual speech perception are introduced and critically discussed. The chapter also provides an overview of the role of visual speech as a language learning tool in multimodal training.
  • Matic, D. (2010). Discourse and syntax in linguistic change: Decline of postverbal topical subjects in Serbo-Croat. In G. Ferraresi, & R. Lühr (Eds.), Diachronic studies on information structure: Language acquisition and change (pp. 117-142). Berlin: Mouton de Gruyter.
  • Mazzone, M., & Campisi, E. (2010). Embodiment, metafore, comunicazione. In G. P. Storari, & E. Gola (Eds.), Forme e formalizzazioni. Atti del XVI congresso nazionale. Cagliari: CUEC.
  • Mazzone, M., & Campisi, E. (2010). Are there communicative intentions? In L. A. Pérez Miranda, & A. I. Madariaga (Eds.), Advances in cognitive science. IWCogSc-10. Proceedings of the ILCLI International Workshop on Cognitive Science Workshop on Cognitive Science (pp. 307-322). Bilbao, Spain: The University of the Basque Country.

    Abstract

    Grice in pragmatics and Levelt in psycholinguistics have proposed models of human communication where the starting point of communicative action is an individual intention. This assumption, though, has to face serious objections with regard to the alleged existence of explicit representations of the communicative goals to be pursued. Here evidence is surveyed which shows that in fact speaking may ordinarily be a quite automatic activity prompted by contextual cues and driven by behavioural schemata abstracted away from social regularities. On the one hand, this means that there could exist no intentions in the sense of explicit representations of communicative goals, following from deliberate reasoning and triggering the communicative action. On the other hand, however, there are reasons to allow for a weaker notion of intention than this, according to which communication is an intentional affair, after all. Communicative action is said to be intentional in this weaker sense to the extent that it is subject to a double mechanism of control, with respect both to present-directed and future-directed intentions.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • McQueen, J. M. (2007). Eight questions about spoken-word recognition. In M. G. Gaskell (Ed.), The Oxford handbook of psycholinguistics (pp. 37-53). Oxford: Oxford University Press.

    Abstract

    This chapter is a review of the literature in experimental psycholinguistics on spoken word recognition. It is organized around eight questions. 1. Why are psycholinguists interested in spoken word recognition? 2. What information in the speech signal is used in word recognition? 3. Where are the words in the continuous speech stream? 4. Which words did the speaker intend? 5. When, as the speech signal unfolds over time, are the phonological forms of words recognized? 6. How are words recognized? 7. Whither spoken word recognition? 8. Who are the researchers in the field?
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Mehler, J., & Cutler, A. (1990). Psycholinguistic implications of phonological diversity among languages. In M. Piattelli-Palmerini (Ed.), Cognitive science in Europe: Issues and trends (pp. 119-134). Rome: Golem.
  • Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.

    Abstract

    Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition.
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two different approaches: 1) investigating the usefulness of a deep convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
  • Micklos, A., Macuch Silva, V., & Fay, N. (2018). The prevalence of repair in studies of language evolution. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 316-318). Toruń, Poland: NCU Press. doi:10.12775/3991-1.075.
  • Mitterer, H. (2007). Top-down effects on compensation for coarticulation are not replicable. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1601-1604). Adelaide: Causal Productions.

    Abstract

    Listeners use lexical knowledge to judge what speech sounds they heard. I investigated whether such lexical influences are truly top-down or just reflect a merging of perceptual and lexical constraints. This was achieved by testing whether the lexically determined identity of a phone exerts the appropriate context effects on surrounding phones. The current investigation focuses on compensation for coarticulation in vowel-fricative sequences, where the presence of a rounded vowel (/y/ rather than /i/) leads fricatives to be perceived as /s/ rather than /ʃ/. This result was consistently found in all three experiments. A vowel was also more likely to be perceived as rounded /y/ if that led listeners to perceive words rather than nonwords (Dutch: meny, English id. vs. meni, a nonword). This lexical influence on the perception of the vowel had, however, no consistent influence on the perception of the following fricative.
  • Mitterer, H., & McQueen, J. M. (2007). Tracking perception of pronunciation variation by tracking looks to printed words: The case of word-final /t/. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1929-1932). Dudweiler: Pirrot.

    Abstract

    We investigated perception of words with reduced word-final /t/ using an adapted eyetracking paradigm. Dutch listeners followed spoken instructions to click on printed words which were accompanied on a computer screen by simple shapes (e.g., a circle). Targets were either above or next to their shapes, and the shapes uniquely identified the targets when the spoken forms were ambiguous between words with or without final /t/ (e.g., bult, bump, vs. bul, diploma). Analysis of listeners’ eye-movements revealed, in contrast to earlier results, that listeners use the following segmental context when compensating for /t/-reduction. Reflecting that /t/-reduction is more likely to occur before bilabials, listeners were more likely to look at the /t/-final words if the next word’s first segment was bilabial. This result supports models of speech perception in which prelexical phonological processes use segmental context to modulate word recognition.
  • Mitterer, H. (2007). Behavior reflects the (degree of) reality of phonological features in the brain as well. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 127-130). Dudweiler: Pirrot.

    Abstract

    To assess the reality of phonological features in language processing (vs. language description), one needs to specify the distinctive claims of distinctive-feature theory. Two of the more far-reaching claims are compositionality and generalizability. I will argue that there is some evidence for the first and evidence against the second claim from a recent behavioral paradigm. Highlighting the contribution of a behavioral paradigm also counterpoints the use of brain measures as the only way to elucidate what is "real for the brain". The contributions of the speakers exemplify how brain measures can help us to understand the reality of phonological features in language processing. The evidence is, however, not convincing for a) the claim for underspecification of phonological features—which has to deal with counterevidence from behavioral as well as brain measures—and b) the claim of position independence of phonological features.
  • Mitterer, H., Brouwer, S., & Huettig, F. (2018). How important is prediction for understanding spontaneous speech? In N. Mani, R. K. Mishra, & F. Huettig (Eds.), The Interactive Mind: Language, Vision and Attention (pp. 26-40). Chennai: Macmillan Publishers India.
  • Moisik, S. R., Zhi Yun, D. P., & Dediu, D. (2019). Active adjustment of the cervical spine during pitch production compensates for shape: The ArtiVarK study. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 864-868). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    The anterior lordosis of the cervical spine is thought to contribute to pitch (fo) production by influencing cricoid rotation as a function of larynx height. This study examines the matter of inter-individual variation in cervical spine shape and whether this has an influence on how fo is produced along increasing or decreasing scales, using the ArtiVarK dataset, which contains real-time MRI pitch production data. We find that the cervical spine actively participates in fo production, but the amount of displacement depends on individual shape. In general, anterior spine motion (tending toward cervical lordosis) occurs for low fo, while posterior movement (tending towards cervical kyphosis) occurs for high fo.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2018). Analyzing EEG Signals in Auditory Speech Comprehension Using Temporal Response Functions and Generalized Additive Models. In Proceedings of Interspeech 2018 (pp. 1452-1456). doi:10.21437/Interspeech.2018-1676.

    Abstract

    Analyzing EEG signals recorded while participants are listening to continuous speech with the purpose of testing linguistic hypotheses is complicated by the fact that the signals simultaneously reflect exogenous acoustic excitation and endogenous linguistic processing. This makes it difficult to trace subtle differences that occur in mid-sentence position. We apply an analysis based on multivariate temporal response functions to uncover subtle mid-sentence effects. This approach is based on a per-stimulus estimate of the response of the neural system to speech input. Analyzing EEG signals predicted on the basis of the response functions might then bring to light condition-specific differences in the filtered signals. We validate this approach by means of an analysis of EEG signals recorded with isolated word stimuli. Then, we apply the validated method to the analysis of the responses to the same words in the middle of meaningful sentences.
  • Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., & Tily, H. (2010). Crowdsourcing and language studies: The new generation of linguistic data. In Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Proceedings of the Workshop (pp. 122-130). Stroudsburg, PA: Association for Computational Linguistics.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.

    Abstract

    In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.

    EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.

    This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Norcliffe, E. (2018). Egophoricity and evidentiality in Guambiano (Nam Trik). In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 305-345). Amsterdam: Benjamins.

    Abstract

    Egophoric verbal marking is a typological feature common to Barbacoan languages, but otherwise unknown in the Andean sphere. The verbal systems of three out of the four living Barbacoan languages, Cha’palaa, Tsafiki and Awa Pit, have previously been shown to express egophoric contrasts. The status of Guambiano has, however, remained uncertain. In this chapter, I show that there are in fact two layers of egophoric or egophoric-like marking visible in Guambiano’s grammar. Guambiano patterns with certain other (non-Barbacoan) languages in having ego-categories which function within a broader evidential system. It is additionally possible to detect what is possibly a more archaic layer of egophoric marking in Guambiano’s verbal system. This marking may be inherited from a common Barbacoan system, thus pointing to a potential genealogical basis for the egophoric patterning common to these languages. The multiple formal expressions of egophoricity apparent both within and across the four languages reveal how egophoric contrasts are susceptible to structural renewal, suggesting a pan-Barbacoan preoccupation with the linguistic encoding of self-knowledge.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2010). The grammar of perception. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 7-16). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Omar, R., Henley, S. M., Hailstone, J. C., Sauter, D., Scott, S. K., Fox, N. C., Rossor, M. N., & Warren, J. D. (2007). Recognition of emotions in faces, voices and music in frontotemporal lobar degeneration [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 78(9), 1014.

    Abstract

    Frontotemporal lobar degeneration (FTLD) is a group of neurodegenerative conditions characterised by focal frontal and/or temporal lobe atrophy. Patients develop a range of cognitive and behavioural abnormalities, including prominent difficulties in comprehending and expressing emotions, with significant clinical and social consequences. Here we report a systematic prospective analysis of emotion processing in different input modalities in patients with FTLD. We examined recognition of happiness, sadness, fear and anger in facial expressions, non-verbal vocalisations and music in patients with FTLD and in healthy age-matched controls. The FTLD group was significantly impaired in all modalities compared with controls, and this effect was most marked for music. Analysing each emotion separately, recognition of negative emotions was impaired in all three modalities in FTLD, and this effect was most marked for fear and anger. Recognition of happiness was deficient only with music. Our findings support the idea that FTLD causes impaired recognition of emotions across input channels, consistent with a common central representation of emotion concepts. Music may be a sensitive probe of emotional deficits in FTLD, perhaps because it requires a more abstract representation of emotion than do animate stimuli such as faces and voices.
  • O'Meara, C., Speed, L. J., San Roque, L., & Majid, A. (2019). Perception Metaphors: A view from diversity. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. 1-16). Amsterdam: Benjamins.

    Abstract

    Our bodily experiences play an important role in the way that we think and speak. Abstract language is, however, difficult to reconcile with this body-centred view, unless we appreciate the role metaphors play. To explore the role of the senses across semantic domains, we focus on perception metaphors, and examine their realisation across diverse languages, methods, and approaches. To what extent do mappings in perception metaphor adhere to predictions based on our biological propensities; and to what extent is there space for cross-linguistic and cross-cultural variation? We find that while some metaphors have widespread commonality, there is more diversity attested than should be comfortable for universalist accounts.
  • Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).

    Abstract

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one.
  • Ozyurek, A. (2007). Processing of multi-modal semantic information: Insights from cross-linguistic comparisons and neurophysiological recordings. In T. Sakamoto (Ed.), Communicating skills of intention (pp. 131-142). Tokyo: Hituzi Syobo Publishing.
  • Ozyurek, A. (2018). Cross-linguistic variation in children’s multimodal utterances. In M. Hickmann, E. Veneziano, & H. Jisa (Eds.), Sources of variation in first language acquisition: Languages, contexts, and learners (pp. 123-138). Amsterdam: Benjamins.

    Abstract

    Our ability to use language is multimodal and requires tight coordination between what is expressed in speech and in gesture, such as pointing or iconic gestures that convey semantic, syntactic and pragmatic information related to speakers’ messages. Interestingly, what is expressed in gesture and how it is coordinated with speech differs in speakers of different languages. This paper discusses recent findings on the development of children’s multimodal expressions taking cross-linguistic variation into account. Although some aspects of speech-gesture development show language-specificity from an early age, it might still take children until nine years of age to exhibit fully adult patterns of cross-linguistic variation. These findings reveal insights about how children coordinate different levels of representations given that their development is constrained by patterns that are specific to their languages.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2007). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. In K. Liebal, C. Müller, & S. Pika (Eds.), Gestural communication in nonhuman and human primates (pp. 199-218). Amsterdam: Benjamins.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language-specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Ozyurek, A., & Woll, B. (2019). Language in the visual modality: Cospeech gesture and sign language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 67-83). Cambridge, MA: MIT Press.
  • Ozyurek, A. (2018). Role of gesture in language processing: Toward a unified account for production and comprehension. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), Oxford Handbook of Psycholinguistics (2nd ed., pp. 592-607). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198786825.013.25.

    Abstract

    Use of language in face-to-face context is multimodal. Production and perception of speech take place in the context of visual articulators such as lips, face, or hand gestures, which convey information relevant to what is expressed in speech at different levels of language. While lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter overviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, “Would you like a drink?”) interact during production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories, and how they can be expanded to consider the multimodal context of language processing, are discussed.
  • Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27 2009. Revised selected papers (pp. 1-10). Berlin: Springer.
  • Papafragou, A., & Ozturk, O. (2007). Children's acquisition of modality. In Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition North America (GALANA 2) (pp. 320-327). Somerville, Mass.: Cascadilla Press.
  • Papafragou, A. (2007). On the acquisition of modality. In T. Scheffler, & L. Mayol (Eds.), Penn Working Papers in Linguistics. Proceedings of the 30th Annual Penn Linguistics Colloquium (pp. 281-293). Department of Linguistics, University of Pennsylvania.
  • Parhammer*, S. I., Ebersberg*, M., Tippmann*, J., Stärk*, K., Opitz, A., Hinger, B., & Rossi, S. (2019). The influence of distraction on speech processing: How selective is selective attention? In Proceedings of Interspeech 2019 (pp. 3093-3097). doi:10.21437/Interspeech.2019-2699.

    Abstract

    * indicates shared first authorship.
    The present study investigated the effects of selective attention on the processing of morphosyntactic errors in unattended parts of speech. Two groups of German native (L1) speakers participated in the present study. Participants listened to sentences in which irregular verbs were manipulated in three different conditions (correct, incorrect but attested ablaut pattern, incorrect and crosslinguistically unattested ablaut pattern). In order to track fast dynamic neural reactions to the stimuli, electroencephalography was used. After each sentence, participants in Experiment 1 performed a semantic judgement task, which deliberately distracted the participants from the syntactic manipulations and directed their attention to the semantic content of the sentence. In Experiment 2, participants carried out a syntactic judgement task, which put their attention on the critical stimuli. The use of two different attentional tasks allowed for investigating the impact of selective attention on speech processing and whether morphosyntactic processing steps are performed automatically. In Experiment 2, the incorrect attested condition elicited a larger N400 component compared to the correct condition, whereas in Experiment 1 no differences between conditions were found. These results suggest that the processing of morphosyntactic violations in irregular verbs is not entirely automatic but seems to be strongly affected by selective attention.
  • Pawley, A., & Hammarström, H. (2018). The Trans New Guinea family. In B. Palmer (Ed.), Papuan Languages and Linguistics (pp. 21-196). Berlin: De Gruyter Mouton.
  • Perniss, P. M., Pfau, R., & Steinbach, M. (2007). Can't you see the difference? Sources of variation in sign language structure. In P. M. Perniss, R. Pfau, & M. Steinbach (Eds.), Visible variation: Cross-linguistic studies in sign language narratives (pp. 1-34). Berlin: Mouton de Gruyter.
  • Perniss, P. M. (2007). Locative functions of simultaneous perspective constructions in German sign language narrative. In M. Vermeerbergen, L. Leeson, & O. Crasborn (Eds.), Simultaneity in signed language: Form and function (pp. 27-54). Amsterdam: Benjamins.
  • Petrich, P., Piedrasanta, R., Figuerola, H., & Le Guen, O. (2010). Variantes y variaciones en la percepción de los antepasados entre los Mayas. In A. Monod Becquelin, A. Breton, & M. H. Ruz (Eds.), Figuras Mayas de la diversidad (pp. 255-275). Mérida, Mexico: Universidad autónoma de México.
  • Piai, V., & Zheng, X. (2019). Speaking waves: Neuronal oscillations in language production. In K. D. Federmeier (Ed.), Psychology of Learning and Motivation (pp. 265-302). Elsevier.

    Abstract

    Language production involves the retrieval of information from memory, the planning of an articulatory program, and executive control and self-monitoring. These processes can be related to the domains of long-term memory, motor control, and executive control. Here, we argue that studying neuronal oscillations provides an important opportunity to understand how general neuronal computational principles support language production, also helping elucidate relationships between language and other domains of cognition. For each relevant domain, we provide a brief review of the findings in the literature with respect to neuronal oscillations. Then, we show how similar patterns are found in the domain of language production, both through review of previous literature and novel findings. We conclude that neurophysiological mechanisms, as reflected in modulations of neuronal oscillations, may act as a fundamental basis for bringing together and enriching the fields of language and cognition.
  • Piepers, J., & Redl, T. (2018). Gender-mismatching pronouns in context: The interpretation of possessive pronouns in Dutch and Limburgian. In B. Le Bruyn, & J. Berns (Eds.), Linguistics in the Netherlands 2018 (pp. 97-110). Amsterdam: Benjamins.

    Abstract

    Gender-(mis)matching pronouns have been studied extensively in experiments. However, a phenomenon common to various languages has thus far been overlooked: the systemic use of non-feminine pronouns when referring to female individuals. The present study is the first to provide experimental insights into the interpretation of such a pronoun: Limburgian zien ‘his/its’ and Dutch zijn ‘his/its’ are grammatically ambiguous between masculine and neuter, but while Limburgian zien can refer to women, the Dutch equivalent zijn cannot. Employing an acceptability judgment task, we presented speakers of Limburgian (N = 51) with recordings of sentences in Limburgian featuring zien, and speakers of Dutch (N = 52) with Dutch translations of these sentences featuring zijn. All sentences featured a potential male or female antecedent embedded in a stereotypically male or female context. We found that ratings were higher for sentences in which the pronoun could refer back to the antecedent. For Limburgians, this extended to sentences mentioning female individuals. Context further modulated sentence appreciation. Possible mechanisms regarding the interpretation of zien as coreferential with a female individual will be discussed.
  • Pijls, F., Kempen, G., & Janner, E. (1990). Intelligent modules for Dutch grammar instruction. In J. Pieters, P. Simons, & L. De Leeuw (Eds.), Research on computer-based instruction. Amsterdam: Swets & Zeitlinger.
  • Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2010). Morphological effects on fine phonetic detail: The case of Dutch -igheid. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 511-532). Berlin: De Gruyter.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2019). Acoustic specification of upper limb movement in voicing. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 68-74). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.
  • Pouw, W., & Dixon, J. A. (2019). Quantifying gesture-speech synchrony. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 75-80). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.

    Abstract

    Spontaneously occurring speech is often seamlessly accompanied by hand gestures. Detailed observations of video data suggest that speech and gesture are tightly synchronized in time, consistent with a dynamic interplay between body and mind. However, spontaneous gesture-speech synchrony has rarely been objectively quantified beyond analyses of video data, which do not allow for identification of kinematic properties of gestures. Consequently, the point in gesture which is held to couple with speech, the so-called moment of “maximum effort”, has been variably equated with the peak velocity, peak acceleration, peak deceleration, or the onset of the gesture. In the current exploratory report, we provide novel evidence from motion-tracking and acoustic data that peak velocity is closely aligned with, and shortly leads, the peak pitch (F0) of speech.

    Additional information

    https://osf.io/9843h/
  • Pye, C., Pfeiler, B., De León, L., Brown, P., & Mateo, P. (2007). Roots or edges? Explaining variation in children's early verb forms across five Mayan languages. In B. Pfeiler (Ed.), Learning indigenous languages: Child language acquisition in Mesoamerica (pp. 15-46). Berlin: Mouton de Gruyter.

    Abstract

    This paper compares the acquisition of verb morphology in five Mayan languages, using a comparative method based on historical linguistics to establish precise equivalences between linguistic categories in the five languages. Earlier work on the acquisition of these languages, based on examination of longitudinal samples of naturally-occurring child language, established that in some of the languages (Tzeltal, Tzotzil) bare roots were the predominant forms for children’s early verbs, but in three other languages (Yukatek, K’iche’, Q’anjobal) unanalyzed portions of the final part of the verb were more likely. That is, children acquiring different Mayan languages initially produce different parts of the adult verb forms. In this paper we analyse the structures of verbs in caregiver speech to these same children, using samples from two-year-old children and their caregivers, and assess the degree to which features of the input might account for the children’s early verb forms in these five Mayan languages. We found that the frequency with which adults produce verbal roots at the extreme right of words and sentences influences the frequency with which children produce bare verb roots in their early verb expressions, while production of verb roots at the extreme left does not, suggesting that the children ignore the extreme left of verbs and sentences when extracting verb roots.
  • Rapold, C. J. (2010). Beneficiary and other roles of the dative in Tashelhiyt. In F. Zúñiga, & S. Kittilä (Eds.), Benefactives and malefactives: Typological perspectives and case studies (pp. 351-376). Amsterdam: Benjamins.

    Abstract

    This paper explores the semantics of the dative in Tashelhiyt, a Berber language from Morocco. After a brief morphosyntactic overview of the dative in this language, I identify a wide range of its semantic roles, including possessor, experiencer, distributive and unintending causer. I arrange these roles in a semantic map and propose semantic links between the roles such as metaphorisation and generalisation. In the light of the Tashelhiyt data, the paper also proposes additions to previous semantic maps of the dative (Haspelmath 1999, 2003) and to Kittilä’s 2005 typology of beneficiary coding.
  • Rapold, C. J. (2010). Defining converbs ten years on - A hitchhikers'guide. In S. Völlmin, A. Amha, C. J. Rapold, & S. Zaugg-Coretti (Eds.), Converbs, medial verbs, clause chaining and related issues (pp. 7-30). Köln: Rüdiger Köppe Verlag.
  • Rapold, C. J. (2007). From demonstratives to verb agreement in Benchnon: A diachronic perspective. In A. Amha, M. Mous, & G. Savà (Eds.), Omotic and Cushitic studies: Papers from the Fourth Cushitic Omotic Conference, Leiden, 10-12 April 2003 (pp. 69-88). Cologne: Rüdiger Köppe.
  • Räsänen, O., Seshadri, S., & Casillas, M. (2018). Comparison of syllabification algorithms and training strategies for robust word count estimation across different languages and recording conditions. In Proceedings of Interspeech 2018 (pp. 1200-1204). doi:10.21437/Interspeech.2018-1047.

    Abstract

    Word count estimation (WCE) from audio recordings has a number of applications, including quantifying the amount of speech that language-learning infants hear in their natural environments, as captured by daylong recordings made with devices worn by infants. To be applicable in a wide range of scenarios and also low-resource domains, WCE tools should be extremely robust against varying signal conditions and require minimal access to labeled training data in the target domain. For this purpose, earlier work has used automatic syllabification of speech, followed by a least-squares-mapping of syllables to word counts. This paper compares a number of previously proposed syllabifiers in the WCE task, including a supervised bi-directional long short-term memory (BLSTM) network that is trained on a language for which high quality syllable annotations are available (a “high resource language”), and reports how the alternative methods compare on different languages and signal conditions. We also explore additive noise and varying-channel data augmentation strategies for BLSTM training, and show how they improve performance in both matching and mismatching languages. Intriguingly, we also find that even though the BLSTM works on languages beyond its training data, the unsupervised algorithms can still outperform it in challenging signal conditions on novel languages.
  • Ravignani, A., Chiandetti, C., & Kotz, S. (2019). Rhythm and music in animal signals. In J. Choe (Ed.), Encyclopedia of Animal Behavior (vol. 1) (2nd ed., pp. 615-622). Amsterdam: Elsevier.
  • Ravignani, A., Garcia, M., Gross, S., de Reus, K., Hoeksema, N., Rubio-Garcia, A., & de Boer, B. (2018). Pinnipeds have something to say about speech and rhythm. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 399-401). Toruń, Poland: NCU Press. doi:10.12775/3991-1.095.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096.
  • Reesink, G. (2010). The difference a word makes. In K. A. McElhannon, & G. Reesink (Eds.), A mosaic of languages and cultures: Studies celebrating the career of Karl J. Franklin (pp. 434-446). Dallas, TX: SIL International.

    Abstract

    This paper offers some thoughts on the question of what effect language has on the understanding, and hence the behavior, of a human being. It reviews some issues of linguistic relativity, known as the “Sapir-Whorf hypothesis,” suggesting that the culture we grow up in is reflected in the language and that our cognition (and our worldview) is shaped or colored by the conventions developed by our ancestors and peers. This raises questions about the degree of translatability, illustrated by the comparison of two poems by a Dutch poet who spent most of his life in the USA. Mutual understanding, I claim, is possible because we have the cognitive apparatus that allows us to enter different emic systems.
  • Reesink, G. (2010). Prefixation of arguments in West Papuan languages. In M. Ewing, & M. Klamer (Eds.), East Nusantara, typological and areal analyses (pp. 71-95). Canberra: Pacific Linguistics.
  • Reinisch, E., Jesse, A., & Nygaard, L. C. (2010). Tone of voice helps learning the meaning of novel adjectives [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 114). York: University of York.

    Abstract

    To understand spoken words listeners have to cope with seemingly meaningless variability in the speech signal. Speakers vary, for example, their tone of voice (ToV) by changing speaking rate, pitch, vocal effort, and loudness. This variation is independent of "linguistic prosody" such as sentence intonation or speech rhythm. The variation due to ToV, however, is not random. Speakers use, for example, higher pitch when referring to small objects than when referring to large objects and importantly, adult listeners are able to use these non-lexical ToV cues to distinguish between the meanings of antonym pairs (e.g., big-small; Nygaard, Herold, & Namy, 2009). In the present study, we asked whether listeners infer the meaning of novel adjectives from ToV and subsequently interpret these adjectives according to the learned meaning even in the absence of ToV. Moreover, if listeners actually acquire these adjectival meanings, then they should generalize these word meanings to novel referents. ToV would thus be a semantic cue to lexical acquisition. This hypothesis was tested in an exposure-test paradigm with adult listeners. In the experiment listeners' eye movements to picture pairs were monitored. The picture pairs represented the endpoints of the adjectival dimensions big-small, hot-cold, and strong-weak (e.g., an elephant and an ant represented big-small). Four picture pairs per category were used. While viewing the pictures participants listened to lexically unconstraining sentences containing novel adjectives, for example, "Can you find the foppick one?" During exposure, the sentences were spoken in infant-directed speech with the intended adjectival meaning expressed by ToV. Word-meaning pairings were counterbalanced across participants. Each word was repeated eight times. Listeners had no explicit task. 
    To guide listeners' attention to the relation between the words and pictures, three sets of filler trials were included that contained real English adjectives (e.g., full-empty). In the subsequent test phase participants heard the novel adjectives in neutral adult-directed ToV. Test sentences were recorded before the speaker was informed about intended word meanings. Participants had to choose which of two pictures on the screen the speaker referred to. Picture pairs that were presented during the exposure phase and four new picture pairs per category that varied along the critical dimensions were tested. During exposure listeners did not spontaneously direct their gaze to the intended referent at the first presentation. But as indicated by listeners' fixation behavior, they quickly learned the relationship between ToV and word meaning over only two exposures. Importantly, during test participants consistently identified the intended referent object even in the absence of informative ToV. Learning was found for all three tested categories and did not depend on whether the picture pairs had been presented during exposure. Listeners thus use ToV not only to distinguish between antonym pairs but they are able to extract word meaning from ToV and assign this meaning to novel words. The newly learned word meanings can then be generalized to novel referents even in the absence of ToV cues. These findings suggest that ToV can be used as a semantic cue to lexical acquisition. References: Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009). The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33, 127-146.
  • Reis, A., Petersson, K. M., & Faísca, L. (2010). Neuroplasticidade: Os efeitos de aprendizagens específicas no cérebro humano. In C. Nunes, & S. N. Jesus (Eds.), Temas actuais em Psicologia (pp. 11-26). Faro: Universidade do Algarve.
  • Reis, A., Faísca, L., Castro, S.-L., & Petersson, K. M. (2010). Preditores da leitura ao longo da escolaridade: Um estudo com alunos do 1 ciclo do ensino básico. In Actas do VII simpósio nacional de investigação em psicologia (pp. 3117-3132).

    Abstract

    [Translated from Portuguese] Reading acquisition proceeds through several stages, from the moment the child first comes into contact with the alphabet to the moment he or she becomes a competent reader, able to read accurately and fluently. Understanding the development of this skill through an analysis of how the weight of predictor variables of reading changes over time makes it possible to theorize about the cognitive mechanisms involved in the different phases of reading development. We carried out a cross-sectional study with 568 students from the second to the fourth year of primary school (1º ciclo do Ensino Básico), in which we assessed the impact of phonological processing abilities, rapid naming, letter-sound knowledge and vocabulary, as well as more general cognitive abilities (non-verbal intelligence and working memory), on reading accuracy and speed. Overall, the results showed that, although phonological awareness remains the most important predictor of reading accuracy and fluency, its weight decreases as schooling progresses. We also observed that, as the contribution of phonological awareness to explaining reading speed diminished, the contribution of other variables more associated with automaticity and lexical recognition, such as rapid naming and vocabulary, increased. In sum, over the course of schooling there is a dynamic shift in the cognitive processes underlying reading, suggesting that children move from a reading strategy anchored in sub-lexical processing, and as such more dependent on phonological processing, to a strategy based on the orthographic recognition of words.
  • Ringersma, J., & Kemps-Snijders, M. (2007). Creating multimedia dictionaries of endangered languages using LEXUS. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 65-68). Baixas, France: ISCA-Int.Speech Communication Assoc.

    Abstract

    This paper reports on the development of a flexible web-based lexicon tool, LEXUS. LEXUS is targeted at linguists involved in language documentation (of endangered languages). It allows the creation of lexica within the structure of the proposed ISO LMF standard and uses the proposed concept naming conventions from the ISO data categories, thus enabling interoperability, search and merging. LEXUS also offers the possibility to visualize language, since it provides functionalities to include audio, video and still images in the lexicon. With LEXUS it is possible to create semantic network knowledge bases, using typed relations. The LEXUS tool is free for use. Index Terms: lexicon, web-based application, endangered languages, language documentation.
  • Rissman, L., & Majid, A. (2019). Agency drives category structure in instrumental events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2661-2667). Montreal, QB: Cognitive Science Society.

    Abstract

    Thematic roles such as Agent and Instrument have a long-standing place in theories of event representation. Nonetheless, the structure of these categories has been difficult to determine. We investigated how instrumental events, such as someone slicing bread with a knife, are categorized in English. Speakers described a variety of typical and atypical instrumental events, and we determined the similarity structure of their descriptions using correspondence analysis. We found that events where the instrument is an extension of an intentional agent were most likely to elicit similar language, highlighting the importance of agency in structuring instrumental categories.
  • Roberts, L. (2010). Parsing the L2 input, an overview: Investigating L2 learners’ processing of syntactic ambiguities and dependencies in real-time comprehension. In G. D. Véronique (Ed.), Language, Interaction and Acquisition [Special issue] (pp. 189-205). Amsterdam: Benjamins.

    Abstract

    The acquisition of second language (L2) syntax has been central to the study of L2 acquisition, but recently there has been an interest in how learners apply their L2 syntactic knowledge to the input in real-time comprehension. Investigating L2 learners’ moment-by-moment syntactic analysis during listening to or reading of a sentence as it unfolds — their parsing of the input — is important, because language learning involves both the acquisition of knowledge and the ability to use it in real time. Using methods employed in monolingual processing research, investigations often focus on the processing of temporary syntactic ambiguities and structural dependencies. Investigating ambiguities involves examining parsing decisions at points in a sentence where there is a syntactic choice, and this can offer insights into the nature of the parsing mechanism, and in particular, its processing preferences. Studying the establishment of syntactic dependencies at the critical point in the input allows for an investigation of how and when different kinds of information (e.g., syntactic, semantic, pragmatic) are put to use in real-time interpretation. Within an L2 context, further questions are of interest and familiar from traditional L2 acquisition research. Specifically, how native-like are the parsing procedures that L2 learners apply when processing the L2 input? What is the role of the learner’s first language (L1)? And, what are the effects of individual factors such as age, proficiency/dominance and working memory on L2 parsing? In the current paper I will provide an overview of the findings of some experimental research designed to investigate these questions.
  • Roelofs, A., & Lamers, M. (2007). Modelling the control of visual attention in Stroop-like tasks. In A. S. Meyer, L. R. Wheeldon, & A. Krott (Eds.), Automaticity and control in language processing (pp. 123-142). Hove: Psychology Press.

    Abstract

    The authors discuss the issue of how visual orienting, selective stimulus processing, and vocal response planning are related in Stroop-like tasks. The evidence suggests that visual orienting is dependent on both visual processing and verbal response planning. They also discuss the issue of selective perceptual processing in Stroop-like tasks. The evidence suggests that space-based and object-based attention lead to a Trojan horse effect in the classic Stroop task, which can be moderated by increasing the spatial distance between colour and word and by making colour and word part of different objects. Reducing the presentation duration of the colour-word stimulus or the duration of either the colour or word dimension reduces Stroop interference. This paradoxical finding was correctly simulated by the WEAVER++ model. Finally, the authors discuss evidence on the neural correlates of executive attention, in particular, the ACC. The evidence suggests that the ACC plays a role in regulation itself rather than only signalling the need for regulation.
  • Rojas-Berscia, L. M. (2019). Nominalization in Shawi/Chayahuita. In R. Zariquiey, M. Shibatani, & D. W. Fleck (Eds.), Nominalization in languages of the Americas (pp. 491-514). Amsterdam: Benjamins.

    Abstract

    This paper deals with the Shawi nominalizing suffixes -su’~-ru’~-nu’ ‘general nominalizer’, -napi/-te’/-tun ‘performer/agent nominalizer’, -pi’ ‘patient nominalizer’, and -nan ‘instrument nominalizer’. The goal of this article is to provide a description of nominalization in Shawi. Throughout this paper I apply the Generalized Scale Model (GSM) (Malchukov, 2006) to Shawi verbal nominalizations, with the intention of presenting a formal representation that will provide a basis for future areal and typological studies of nominalization. In addition, I dialogue with Shibatani’s model to see how the loss or gain of categories correlates with the lexical or grammatical nature of nominalizations: strong nominalization in Shawi correlates with lexical nominalization, whereas weak nominalization correlates with grammatical nominalization. A typology which takes into account the productivity of the nominalizers is also discussed.
  • Rommers, J., & Federmeier, K. D. (2018). Electrophysiological methods. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 247-265). Hoboken: Wiley.
  • Rossi, G. (2010). Interactive written discourse: Pragmatic aspects of SMS communication. In G. Garzone, P. Catenaccio, & C. Degano (Eds.), Diachronic perspectives on genres in specialized communication. Conference Proceedings (pp. 135-138). Milano: CUEM.
  • Rowland, C. F., & Kidd, E. (2019). Key issues and future directions: How do children acquire language? In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 181-185). Cambridge, MA: MIT Press.
  • Rubio-Fernández, P., & Jara-Ettinger, J. (2018). Joint inferences of speakers’ beliefs and referents based on how they speak. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 991-996). Austin, TX: Cognitive Science Society.

    Abstract

    For almost two decades, the poor performance observed with the so-called Director task has been interpreted as evidence of limited use of Theory of Mind in communication. Here we propose a probabilistic model of common ground in referential communication that derives three inferences from an utterance: what the speaker is talking about in a visual context, what she knows about the context, and what referential expressions she prefers. We tested our model by comparing its inferences with those made by human participants and found that it closely mirrors their judgments, whereas an alternative model compromising the hearer’s expectations of cooperativeness and efficiency reveals a worse fit to the human data. Rather than assuming that common ground is fixed in a given exchange and may or may not constrain reference resolution, we show how common ground can be inferred as part of the process of reference assignment.
  • Rubio-Fernández, P. (2019). Theory of mind. In C. Cummins, & N. Katsos (Eds.), The Handbook of Experimental Semantics and Pragmatics (pp. 524-536). Oxford: Oxford University Press.
  • De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van de Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).

    Abstract

    In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadow on speech accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech they are supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing in other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing, which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing.
Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn-taking disappears completely. The implications for machine-generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) are a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when transforming a thought into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., & Toni, I. (2007). On the origins of intentions. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 593-610). Oxford: Oxford University Press.
  • De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer.
  • Sadakata, M., Van der Zanden, L., & Sekiyama, K. (2010). Influence of musical training on perception of L2 speech. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 118-121).

    Abstract

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels). Native Dutch and Japanese speakers with different musical training experience, matched for their estimated verbal IQ, participated in the experiments. Results indicated that musical training strongly increases one’s ability to perceive timing information in speech signals. We also found a benefit of musical training on discrimination performance for a subset of the tested vowel contrasts.
  • Saleh, A., Beck, T., Galke, L., & Scherp, A. (2018). Performance comparison of ad-hoc retrieval models over full-text vs. titles of documents. In M. Dobreva, A. Hinze, & M. Žumer (Eds.), Maturity and Innovation in Digital Libraries: 20th International Conference on Asia-Pacific Digital Libraries, ICADL 2018, Hamilton, New Zealand, November 19-22, 2018, Proceedings (pp. 290-303). Cham, Switzerland: Springer.

    Abstract

    While there are many studies on information retrieval models using full-text, there are presently no comparison studies of full-text retrieval vs. retrieval only over the titles of documents. On the one hand, the full-text of documents like scientific papers is not always available due to, e.g., copyright policies of academic publishers. On the other hand, conducting a search based on titles alone has strong limitations. Titles are short and therefore may not contain enough information to yield satisfactory search results. In this paper, we compare different retrieval models regarding their search performance on the full-text vs. only titles of documents. We use different datasets, including the three digital library datasets: EconBiz, IREON, and PubMed. The results show that it is possible to build effective title-based retrieval models that provide competitive results comparable to full-text retrieval. On average, the evaluation results of the best title-based retrieval models are only 3% below those of the best full-text-based retrieval models.
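    The kind of comparison the abstract describes can be illustrated with a classical lexical ranking function such as BM25, scored once over title tokens and once over full-text tokens. The sketch below is not the authors' implementation; the function name, toy documents, and the conventional parameter defaults (k1 = 1.5, b = 0.75) are illustrative assumptions.

    ```python
    import math
    from collections import Counter

    def bm25_scores(query, docs, k1=1.5, b=0.75):
        """Score each tokenized document in `docs` against `query` with BM25.

        `query` is a list of terms; `docs` is a list of token lists
        (e.g. title-only tokens, or full-text tokens)."""
        N = len(docs)
        avgdl = sum(len(d) for d in docs) / N  # average document length
        df = Counter()                          # document frequency per term
        for d in docs:
            df.update(set(d))
        scores = []
        for d in docs:
            tf = Counter(d)
            s = 0.0
            for term in query:
                if term not in tf:
                    continue
                idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
                norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
                s += idf * tf[term] * (k1 + 1) / norm
            scores.append(s)
        return scores

    # Toy comparison: the same ranker applied to a title-only index.
    titles = [["speech", "recognition", "models"],
              ["digital", "libraries", "maturity"]]
    print(bm25_scores(["speech", "recognition"], titles))
    ```

    Running the same scorer over a full-text index of the same documents and comparing rankings is the shape of the experiment; the paper's point is that the title-only rankings are nearly as good.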
  • San Roque, L. (2018). Egophoric patterns in Duna verbal morphology. In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 405-436). Amsterdam: Benjamins.

    Abstract

    In the language Duna (Trans New Guinea), egophoric distributional patterns are a pervasive characteristic of verbal morphology, but do not comprise a single coherent system. Many morphemes, including evidential markers and future time inflections, show strong tendencies to co-occur with ‘informant’ subjects (the speaker in a declarative, the addressee in an interrogative), or alternatively with non-informant subjects. The person sensitivity of the Duna forms is observable in frequency, speaker judgments of sayability, and subject implicatures. Egophoric and non-egophoric distributional patterns are motivated by the individual semantics of the morphemes, their perspective-taking properties, and logical and/or conventionalised expectations of how people experience and talk about events. Distributional tendencies can also be flouted, providing a resource for speakers to convey attitudes towards their own knowledge and experiences, or the knowledge and experiences of others.
  • San Roque, L., Floyd, S., & Norcliffe, E. (2018). Egophoricity: An introduction. In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 1-78). Amsterdam: Benjamins.
  • San Roque, L., & Norcliffe, E. (2010). Knowledge asymmetries in grammar and interaction. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529153.
  • San Roque, L., & Schieffelin, B. B. (2018). Learning how to know. In S. Floyd, E. Norcliffe, & L. San Roque (Eds.), Egophoricity (pp. 437-471). Amsterdam: Benjamins. doi:10.1075/tsl.118.14san.

    Abstract

    Languages with egophoric systems require their users to pay special attention to who knows what in the speech situation, providing formal marking of whether the speaker or addressee has personal knowledge of the event being discussed. Such systems have only recently come to be studied in cross-linguistic perspective. This chapter has two aims in regard to contributing to our understanding of egophoric marking. Firstly, it presents relevant data from a relatively under-described and endangered language, Kaluli (aka Bosavi), spoken in Papua New Guinea. Unusually, Kaluli tense inflections appear to show a mix of both egophoric and first vs non-first person-marking features, as well as other contrasts that are broadly relevant to a typology of egophoricity, such as special constructions for the expression of involuntary experience. Secondly, the chapter makes a preliminary foray into issues concerning egophoric marking and child language, drawing on a naturalistic corpus of child-caregiver interactions. Questions for future investigation raised by the Kaluli data concern, for example, the potentially challenging nature of mastering inflections that are sensitive to both person and speech act type, the possible role of question-answer pairs in children’s acquisition of egophoric morphology, and whether there are special features of epistemic access and authority that relate particularly to child-adult interactions.
  • Sauter, D. (2010). Non-verbal emotional vocalizations across cultures [Abstract]. In E. Zimmermann, & E. Altenmüller (Eds.), Evolution of emotional communication: From sounds in nonhuman mammals to speech and music in man (p. 15). Hannover: University of Veterinary Medicine Hannover.

    Abstract

    Despite differences in language, culture, and ecology, some human characteristics are similar in people all over the world, while other features vary from one group to the next. These similarities and differences can inform arguments about what aspects of the human mind are part of our shared biological heritage and which are predominantly products of culture and language. I will present data from a cross-cultural project investigating the recognition of non-verbal vocalizations of emotions, such as screams and laughs, across two highly different cultural groups. English participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognised. In contrast, a set of additional positive emotions was only recognised within, but not across, cultural boundaries. These results indicate that a number of primarily negative emotions are associated with vocalizations that can be recognised across cultures, while at least some positive emotions are communicated with culture-specific signals. I will discuss these findings in the context of accounts of emotions at differing levels of analysis, with an emphasis on the often-neglected positive emotions.
  • Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. In C. Douilliez, & C. Humez (Eds.), Third European Conference on Emotion 2010. Proceedings (pp. 39-39). Lille: Université de Lille.

    Abstract

    Many studies suggest that emotional signals can be recognized across cultures and modalities. But to what extent are these signals innate and to what extent are they learned? This study investigated whether auditory learning is necessary for the production of recognizable emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of eight congenitally deaf Dutch individuals, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25). Considerable variability was found across emotions, suggesting that auditory learning is more important for the acquisition of certain types of vocalizations than for others. In particular, achievement and surprise sounds were relatively poorly recognized. In contrast, amusement and disgust vocalizations were well recognized, suggesting that for some emotions, recognizable vocalizations can develop without any auditory learning. The implications of these results for models of emotional communication are discussed, and other routes of social learning available to the deaf individuals are considered.
  • Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. Journal of the Acoustical Society of America, 128, 2476.

    Abstract

    Vocalizations like screams and laughs are used to communicate affective states, but what acoustic cues in these signals require vocal learning and which ones are innate? This study investigated the role of auditory learning in the production of non-verbal emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of congenitally deaf Dutch individuals and matched hearing controls, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25), and judgments were analyzed together with acoustic cues, including envelope, pitch, and spectral measures. Considerable variability was found across emotions and acoustic cues, and the two types of information were related for a sub-set of the emotion categories. These results suggest that auditory learning is less important for the acquisition of certain types of vocalizations than for others (particularly amusement and relief), and they also point to a less central role for auditory learning of some acoustic features in affective non-verbal vocalizations. The implications of these results for models of vocal emotional communication are discussed.
  • Schäfer, M., & Haun, D. B. M. (2010). Sharing among children across cultures. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 45-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529154.
  • Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.

    Abstract

    The difficulty of the task of segmenting a speech signal into its words is immediately clear when listening to a foreign language; it is much harder to segment the signal into its words, since the words of the language are unknown. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We employed an automatic algorithm on the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes.
  • Scharenborg, O., & Merkx, D. (2018). The role of articulatory feature representation quality in a computational model of human spoken-word recognition. In Proceedings of the Machine Learning in Speech and Language Processing Workshop (MLSLP 2018).

    Abstract

    Fine-Tracker is a speech-based model of human speech recognition. While previous work has shown that Fine-Tracker is successful at modelling aspects of human spoken-word recognition, its speech recognition performance is not comparable to that of human performance, possibly due to suboptimal intermediate articulatory feature (AF) representations. This study investigates the effect of improved AF representations, obtained using a state-of-the-art deep convolutional network, on Fine-Tracker’s simulation and recognition performance: although the improved AF quality resulted in improved speech recognition, it surprisingly did not lead to an improvement in Fine-Tracker’s simulation power.
  • Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.

    Abstract

    Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2007). Early decision making in continuous speech. In M. Grimm, & K. Kroschel (Eds.), Robust speech recognition and understanding (pp. 333-350). I-Tech Education and Publishing.
  • Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.

    Abstract

    Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models, i.e., by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, and this must have detrimental effects for both teacher and student. In a past study we have analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we have devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability.
  • Schiller, N. O., & Verdonschot, R. G. (2018). Morphological theory and neurolinguistics. In J. Audring, & F. Masini (Eds.), The Oxford Handbook of Morphological Theory (pp. 554-572). Oxford: Oxford University Press.

    Abstract

    This chapter describes neurolinguistic aspects of morphology, morphological theory, and especially morphological processing. It briefly mentions the main processing models in the literature and how they deal with morphological issues, i.e. full-listing models (all morphologically related words are listed separately in the lexicon and are processed individually), full-parsing or decompositional models (morphologically related words are not listed in the lexicon but are decomposed into their constituent morphemes, each of which is listed in the lexicon), and hybrid, so-called dual route, models (regular morphologically related words are decomposed, irregular words are listed). The chapter also summarizes some important findings from the literature that bear on neurolinguistic aspects of morphological processing, from both language comprehension and language production, taking into consideration neuropsychological patient studies as well as studies employing neuroimaging methods.
  • Schoenmakers, G.-J., & De Swart, P. (2019). Adverbial hurdles in Dutch scrambling. In A. Gattnar, R. Hörnig, M. Störzer, & S. Featherston (Eds.), Proceedings of Linguistic Evidence 2018: Experimental Data Drives Linguistic Theory (pp. 124-145). Tübingen: University of Tübingen.

    Abstract

    This paper addresses the role of the adverb in Dutch direct object scrambling constructions. We report four experiments in which we investigate whether the structural position and the scope sensitivity of the adverb affect acceptability judgments of scrambling constructions and native speakers' tendency to scramble definite objects. We conclude that the type of adverb plays a key role in Dutch word ordering preferences.
  • Schuerman, W. L., McQueen, J. M., & Meyer, A. S. (2019). Speaker statistical averageness modulates word recognition in adverse listening conditions. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1203-1207). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    We tested whether statistical averageness (SA) at the level of the individual speaker could predict a speaker’s intelligibility. 28 female and 21 male speakers of Dutch were recorded producing 336 sentences, each containing two target nouns. Recordings were compared to those of all other same-sex speakers using dynamic time warping (DTW). For each sentence, the DTW distance constituted a metric of phonetic distance from one speaker to all other speakers. SA comprised the average of these distances. Later, the same participants performed a word recognition task on the target nouns in the same sentences, under three degraded listening conditions. In all three conditions, accuracy increased with SA. This held even when participants listened to their own utterances. These findings suggest that listeners process speech with respect to the statistical properties of the language spoken in their community, rather than using their own speech as a reference.
