Publications

  • Levinson, S. C. (2023). On cognitive artifacts. In R. Feldhay (Ed.), The evolution of knowledge: A scientific meeting in honor of Jürgen Renn (pp. 59-78). Berlin: Max Planck Institute for the History of Science.

    Abstract

    Wearing the hat of a cognitive anthropologist rather than a historian, I will try to amplify Renn’s ideas cited above. I argue that a particular subclass of material objects, namely “cognitive artifacts,” involves a close coupling of mind and artifact that acts like a brain prosthesis. Simple cognitive artifacts are external objects that act as aids to internal computation, and not all cultures have extended inventories of these. Cognitive artifacts in this sense (e.g., calculating or measuring devices) have clearly played a central role in the history of science. But the notion can be widened to take in less material externalizations of cognition, like writing and language itself. A critical question here is how and why this close coupling of internal computation and external device actually works, a rather neglected question to which I’ll suggest some answers.

  • Levshina, N. (2023). Communicative efficiency: Language structure and use. Cambridge: Cambridge University Press.

    Abstract

    All living beings try to save effort, and humans are no exception. This groundbreaking book shows how we save time and energy during communication by unconsciously making efficient choices in grammar, lexicon and phonology. It presents a new theory of 'communicative efficiency', the idea that language is designed to be as efficient as possible, as a system of communication. The new framework accounts for the diverse manifestations of communicative efficiency across a typologically broad range of languages, using various corpus-based and statistical approaches to explain speakers' bias towards efficiency. The author's unique interdisciplinary expertise allows her to provide rich evidence from a broad range of language sciences. She integrates diverse insights from over a hundred years of research into this comprehensible new theory, which she presents step-by-step in clear and accessible language. It is essential reading for language scientists, cognitive scientists and anyone interested in language use and communication.
  • Levshina, N. (2023). Testing communicative and learning biases in a causal model of language evolution: A study of cues to Subject and Object. In M. Degano, T. Roberts, G. Sbardolini, & M. Schouwstra (Eds.), The Proceedings of the 23rd Amsterdam Colloquium (pp. 383-387). Amsterdam: University of Amsterdam.
  • Levshina, N. (2023). Word classes in corpus linguistics. In E. Van Lier (Ed.), The Oxford handbook of word classes (pp. 833-850). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198852889.013.34.

    Abstract

    Word classes play a central role in corpus linguistics under the name of parts of speech (POS). Many popular corpora are provided with POS tags. This chapter gives examples of popular tagsets and discusses the methods of automatic tagging. It also considers bottom-up approaches to POS induction, which are particularly important for the ‘poverty of stimulus’ debate in language acquisition research. The choice of optimal POS tagging involves many difficult decisions, which are related to the level of granularity, redundancy at different levels of corpus annotation, cross-linguistic applicability, language-specific descriptive adequacy, and dealing with fuzzy boundaries between POS. The chapter also discusses the problem of flexible word classes and demonstrates how corpus data with POS tags and syntactic dependencies can be used to quantify the level of flexibility in a language.
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators. In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.

    Abstract

    Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as 'open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.
  • Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). The timing bottleneck: Why timing and overlap are mission-critical for conversational user interfaces, speech recognition and dialogue systems. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial 2023). doi:10.18653/v1/2023.sigdial-1.45.

    Abstract

    Speech recognition systems are a key intermediary in voice-driven human-computer interaction. Although speech recognition works well for pristine monologic audio, real-life use cases in open-ended interactive settings still present many challenges. We argue that timing is mission-critical for dialogue systems, and evaluate 5 major commercial ASR systems for their conversational and multilingual support. We find that word error rates for natural conversational data in 6 languages remain abysmal, and that overlap remains a key challenge (study 1). This impacts especially the recognition of conversational words (study 2), and in turn has dire consequences for downstream intent recognition (study 3). Our findings help to evaluate the current state of conversational ASR, contribute towards multidimensional error analysis and evaluation, and identify phenomena that need most attention on the way to build robust interactive speech technologies.
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). A new artificial sign-space proxy for investigating the emergence of structure and categories in speech. In The Scottish Consortium for ICPhS 2015 (Ed.), The Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).
  • Little, H., Eryılmaz, K., & de Boer, B. (2015). Linguistic modality affects the creation of structure and iconicity in signals. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. Jennings, & P. Maglio (Eds.), The 37th annual meeting of the Cognitive Science Society (CogSci 2015) (pp. 1392-1398). Austin, TX: Cognitive Science Society.

    Abstract

    Different linguistic modalities (speech or sign) offer different levels at which signals can iconically represent the world. One hypothesis argues that this iconicity has an effect on how linguistic structure emerges. However, exactly how and why these effects might come about is in need of empirical investigation. In this contribution, we present a signal creation experiment in which both the signalling space and the meaning space are manipulated so that different levels and types of iconicity are available between the signals and meanings. Signals are produced using an infrared sensor that detects the hand position of participants to generate auditory feedback. We find evidence that iconicity may be maladaptive for the discrimination of created signals. Further, we implemented Hidden Markov Models to characterise the structure within signals, which was also used to inform a metric for iconicity.
  • Majid, A. (2015). Comparing lexicons cross-linguistically. In J. R. Taylor (Ed.), The Oxford Handbook of the Word (pp. 364-379). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199641604.013.020.

    Abstract

    The lexicon is central to the concerns of disparate disciplines and has correspondingly elicited conflicting proposals about some of its foundational properties. Some suppose that word meanings and their associated concepts are largely universal, while others note that local cultural interests infiltrate every category in the lexicon. This chapter reviews research in two semantic domains—perception and the body—in order to illustrate crosslinguistic similarities and differences in semantic fields. Data is considered from a wide array of languages, especially those from small-scale indigenous communities which are often overlooked. In every lexical field we find considerable variation across cultures, raising the question of where this variation comes from. Is it the result of different ecological or environmental niches, cultural practices, or accidents of historical pasts? Current evidence suggests that diverse pressures differentially shape lexical fields.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The semantics of reciprocal constructions across languages: An extensional approach. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 29-60). Amsterdam: Benjamins.

    Abstract

    How similar are reciprocal constructions in the semantic parameters they encode? We investigate this question by using an extensional approach, which examines similarity of meaning by examining how constructions are applied over a set of 64 videoclips depicting reciprocal events (Evans et al. 2004). We apply statistical modelling to descriptions from speakers of 20 languages elicited using the videoclips. We show that there are substantial differences in meaning between constructions of different languages.

  • Majid, A., & Levinson, S. C. (2011). The language of perception across cultures [Abstract]. Abstracts of the XXth Congress of European Chemoreception Research Organization, ECRO-2010. Publ. in Chemical Senses, 36(1), E7-E8.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is “wired up” and to what extent is it a question of local cultural preoccupation? The “Language of Perception” project tests the hypothesis that some perceptual domains may be more “ineffable” – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes have been used to elicit descriptions from speakers of more than twenty languages—including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages. Moreover, we have identified exquisite elaboration in the olfactory domains in some cultural settings, contrary to some contemporary predictions within the cognitive sciences. These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.
  • Malt, B. C., Ameel, E., Gennari, S., Imai, M., Saji, N., & Majid, A. (2011). Do words reveal concepts? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 519-524). Austin, TX: Cognitive Science Society.

    Abstract

    To study concepts, cognitive scientists must first identify some. The prevailing assumption is that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world by name. Either ordinary concepts must be heavily language-dependent or names cannot be a direct route to concepts. We asked English, Dutch, Spanish, and Japanese speakers to name videos of human locomotion and judge their similarities. We investigated what name inventories and scaling solutions on name similarity and on physical similarity for the groups individually and together suggest about the underlying concepts. Aggregated naming and similarity solutions converged on results distinct from the answers suggested by the word inventories and scaling solutions of any single language. Words such as triangle, table, and robin can help identify the conceptual space of a domain, but they do not directly reveal units of knowledge usefully considered 'concepts'.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Saji, N., & Majid, A. (2015). Where are the concepts? What words can and can’t reveal. In E. Margolis, & S. Laurence (Eds.), The conceptual Mind: New directions in the study of concepts (pp. 291-326). Cambridge, MA: MIT Press.

    Abstract

    Concepts are so fundamental to human cognition that Fodor declared the heart of a cognitive science to be its theory of concepts. To study concepts, though, cognitive scientists need to be able to identify some. The prevailing assumption has been that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world with names. Either ordinary concepts must be heavily language dependent, or names cannot be a direct route to concepts. We asked speakers of English, Dutch, Spanish, and Japanese to name a set of 36 video clips of human locomotion and to judge the similarities among them. We investigated what name inventories, name extensions, scaling solutions on name similarity, and scaling solutions on nonlinguistic similarity from the groups, individually and together, suggest about the underlying concepts. Aggregated naming data and similarity solutions converged on results distinct from individual languages.
  • Marcus, G., & Fisher, S. E. (2011). Genes and language. In P. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 341-344). New York: Cambridge University Press.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (2011). Landscape in language: An introduction. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. 1-24). Amsterdam: John Benjamins.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (Eds.). (2011). Landscape in language: Transdisciplinary perspectives. Amsterdam: John Benjamins.

    Abstract

    Landscape is fundamental to human experience. Yet until recently, the study of landscape has been fragmented among the disciplines. This volume focuses on how landscape is represented in language and thought, and what this reveals about the relationships of people to place and to land. Scientists of various disciplines such as anthropologists, geographers, information scientists, linguists, and philosophers address several questions, including: Are there cross-cultural and cross-linguistic variations in the delimitation, classification, and naming of geographic features? Can alternative world-views and conceptualizations of landscape be used to produce culturally-appropriate Geographic Information Systems (GIS)? Topics include the ontology of landscape; landscape terms and concepts; toponyms; spiritual aspects of land and landscape terms; research methods; ethical dimensions of the research; and its potential value to indigenous communities involved in this type of research.
  • de Marneffe, M.-C., Tomlinson, J. J., Tice, M., & Sumner, M. (2011). The interaction of lexical frequency and phonetic variation in the perception of accented speech. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3575-3580). Austin, TX: Cognitive Science Society.

    Abstract

    How listeners understand spoken words despite massive variation in the speech signal is a central issue for linguistic theory. A recent focus on lexical frequency and specificity has proved fruitful in accounting for this phenomenon. Speech perception, though, is a multi-faceted process and likely incorporates a number of mechanisms to map a variable signal to meaning. We examine a well-established language use factor — lexical frequency — and how this factor is integrated with phonetic variability during the perception of accented speech. We show that an integrated perspective highlights a low-level perceptual mechanism that accounts for the perception of accented speech absent native contrasts, while shedding light on the use of interactive language factors in the perception of spoken words.
  • Martin, A., & Van Turennout, M. (2002). Searching for the neural correlates of object priming. In L. R. Squire, & D. L. Schacter (Eds.), The Neuropsychology of Memory (pp. 239-247). New York: Guilford Press.
  • Martin, R. C., & Tan, Y. (2015). Sentence comprehension deficits: Independence and interaction of syntax, semantics, and working memory. In A. E. Hillis (Ed.), Handbook of adult language disorders (2nd ed., pp. 303-327). Boca Raton: CRC Press.
  • Matić, D. (2015). Information structure in linguistics. In J. D. Wright (Ed.), The International Encyclopedia of Social and Behavioral Sciences (2nd ed.) Vol. 12 (pp. 95-99). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.53013-X.

    Abstract

    Information structure is a subfield of linguistic research dealing with the ways speakers encode instructions to the hearer on how to process the message relative to their temporary mental states. To this end, sentences are segmented into parts conveying known and yet-unknown information, usually labeled ‘topic’ and ‘focus.’ Many languages have developed specialized grammatical and lexical means of indicating this segmentation.
  • Matsuo, A., & Duffield, N. (2002). Assessing the generality of knowledge about English ellipsis in SLA. In J. Costa, & M. J. Freitas (Eds.), Proceedings of the GALA 2001 Conference on Language Acquisition (pp. 49-53). Lisboa: Associacao Portuguesa de Linguistica.
  • Matsuo, A., & Duffield, N. (2002). Finiteness and parallelism: Assessing the generality of knowledge about English ellipsis in SLA. In B. Skarabela, S. Fish, & A.-H.-J. Do (Eds.), Proceedings of the 26th Boston University Conference on Language Development (pp. 197-207). Somerville, Massachusetts: Cascadilla Press.
  • Mauner, G., Koenig, J.-P., Melinger, A., & Bienvenue, B. (2002). The lexical source of unexpressed participants and their role in sentence and discourse understanding. In P. Merlo, & S. Stevenson (Eds.), The Lexical Basis of Sentence Processing: Formal, Computational and Experimental Issues (pp. 233-254). Amsterdam: John Benjamins.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Mishra, R., Srinivasan, N., & Huettig, F. (Eds.). (2015). Attention and vision in language processing. Berlin: Springer. doi:10.1007/978-81-322-2443-3.
  • Mitterer, H. (2011). Social accountability influences phonetic alignment. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2442.

    Abstract

    Speakers tend to take over the articulatory habits of their interlocutors [e.g., Pardo, JASA (2006)]. This phonetic alignment could be the consequence of either a social mechanism or a direct and automatic link between speech perception and production. The latter assumes that social variables should have little influence on phonetic alignment. To test this, participants were engaged in a "cloze task" (i.e., Stimulus: "In fantasy movies, silver bullets are used to kill ..." Response: "werewolves") with either one or four interlocutors. Given findings with the Asch-conformity paradigm in social psychology, multiple consistent speakers should exert a stronger force on the participant to align. To control the speech style of the interlocutors, their questions and answers were pre-recorded in either a formal or a casual speech style. The stimuli's speech style was then manipulated between participants and was consistent throughout the experiment for a given participant. Surprisingly, participants aligned less with the speech style if there were multiple interlocutors. This may reflect a "diffusion of responsibility": Participants may find it more important to align when they interact with only one person than with a larger group.
  • Moers, C., Janse, E., & Meyer, A. S. (2015). Probabilistic reduction in reading aloud: A comparison of younger and older adults. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Frequent and predictable words are generally pronounced with less effort and are therefore acoustically more reduced than less frequent or unpredictable words. Local predictability can be operationalised by Transitional Probability (TP), which indicates how likely a word is to occur given its immediate context. We investigated whether and how probabilistic reduction effects on word durations change with adult age when reading aloud content words embedded in sentences. The results showed equally large frequency effects on verb and noun durations for both younger (Mage = 20 years) and older (Mage = 68 years) adults. Backward TP also affected word duration for younger and older adults alike. Forward TP, however, had no significant effect on word duration in either age group. Our results resemble earlier findings of more robust backward TP effects compared to forward TP effects. Furthermore, unlike the often-reported decline in predictive processing with aging, probabilistic reduction effects remain stable across adulthood.
  • Moisik, S. R., & Dediu, D. (2015). Anatomical biasing and clicks: Preliminary biomechanical modelling. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes constraints, consequences (pp. 8-13). Glasgow: ICPhS.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A preliminary biomechanical model of click production was created to examine whether these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest that the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modelling and experimental research is required to solidify the claim.
  • Morano, L., Ernestus, M., & Ten Bosch, L. (2015). Schwa reduction in low-proficiency L2 speakers: Learning and generalization. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper investigated the learnability and generalizability of French schwa alternation by Dutch low-proficiency second language learners. We trained 40 participants on 24 new schwa words by exposing them equally often to the reduced and full forms of these words. We then assessed participants' accuracy and reaction times to these newly learnt words as well as 24 previously encountered schwa words with an auditory lexical decision task. Our results show learning of the new words in both forms. This suggests that lack of exposure is probably the main cause of learners' difficulties with reduced forms. Nevertheless, the full forms were slightly better recognized than the reduced ones, possibly due to phonetic and phonological properties of the reduced forms. We also observed no generalization to previously encountered words, suggesting that our participants stored both of the learnt word forms and did not create a rule that applies to all schwa words.
  • Mulder, K., Brekelmans, G., & Ernestus, M. (2015). The processing of schwa reduced cognates and noncognates in non-native listeners of English. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    In speech, words are often reduced rather than fully pronounced (e.g., /ˈsʌmri/ for /ˈsʌməri/, summary). Non-native listeners may have problems in processing these reduced forms, because they have encountered them less often. This paper addresses the question whether this also holds for highly proficient non-natives and for words with similar forms and meanings in the non-natives' mother tongue (i.e., cognates). In an English auditory lexical decision task, natives and highly proficient Dutch non-natives of English listened to cognates and non-cognates that were presented in full or without their post-stress schwa. The data show that highly proficient learners are affected by reduction as much as native speakers. Nevertheless, the two listener groups appear to process reduced forms differently, because non-natives produce more errors on reduced cognates than on non-cognates. While listening to reduced forms, non-natives appear to be hindered by the co-activated lexical representations of cognate forms in their native language.
  • Muysken, P., Hammarström, H., Birchall, J., van Gijn, R., Krasnoukhova, O., & Müller, N. (2015). Linguistic Areas, bottom up or top down? The case of the Guaporé-Mamoré region. In B. Comrie, & L. Golluscio (Eds.), Language Contact and Documentation / Contacto lingüístico y documentación (pp. 205-238). Berlin: De Gruyter.
  • Nabrotzky, J., Ambrazaitis, G., Zellers, M., & House, D. (2023). Temporal alignment of manual gestures’ phase transitions with lexical and post-lexical accentual F0 peaks in spontaneous Swedish interaction. In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527194.

    Abstract

    Many studies investigating the temporal alignment of co-speech gestures to acoustic units in the speech signal find a close coupling of gestural landmarks and pitch accents or the stressed syllable of pitch-accented words. In English, a pitch accent is anchored in the lexically stressed syllable. Hence, it is unclear whether it is the lexical phonological dimension of stress, or the phrase-level prominence, that determines the details of speech-gesture synchronization. This paper explores the relation between gestural phase transitions and accentual F0 peaks in Stockholm Swedish, which exhibits a lexical pitch accent distinction. When words are produced with phrase-level prominence, there are three different configurations of the lexicality of F0 peaks and the status of the syllable they are aligned with. By analyzing the alignment of the different F0 peaks with gestural onsets in spontaneous dyadic conversations, we aim to contribute to our understanding of the role of lexical prosodic phonology in the co-production of speech and gesture. The results, though limited by a small dataset, suggest differences between the three types of peaks in which types of gesture phase onsets they tend to align with, and how well these landmarks align with each other, although these differences did not reach significance.
  • Neger, T. M., Rietveld, T., & Janse, E. (2015). Adult age effects in auditory statistical learning. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Statistical learning plays a key role in language processing, e.g., for speech segmentation. Older adults have been reported to show less statistical learning on the basis of visual input than younger adults. Given age-related changes in perception and cognition, we investigated whether statistical learning is also impaired in the auditory modality in older compared to younger adults and whether individual learning ability is associated with measures of perceptual (i.e., hearing sensitivity) and cognitive functioning in both age groups. Thirty younger and thirty older adults performed an auditory artificial-grammar-learning task to assess their statistical learning ability. In younger adults, perceptual effort came at the cost of processing resources required for learning. Inhibitory control (as indexed by Stroop color-naming performance) did not predict auditory learning. Overall, younger and older adults showed the same amount of auditory learning, indicating that statistical learning ability is preserved over the adult life span.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2015). Exemplar effects arise in a lexical decision task, but only under adverse listening conditions. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper studies the influence of adverse listening conditions on exemplar effects in priming experiments that do not instruct participants to use their episodic memories. We conducted two lexical decision experiments, in which a prime and a target represented the same word type and could be spoken by the same or a different speaker. In Experiment 1, participants listened to clear speech, and showed no exemplar effects: they recognised repetitions by the same speaker as quickly as different-speaker repetitions. In Experiment 2, the stimuli contained noise, and exemplar effects did arise. Importantly, Experiment 1 elicited longer average RTs than Experiment 2, a result that contradicts the time-course hypothesis, according to which exemplars only play a role when processing is slow. Instead, our findings support the hypothesis that exemplar effects arise under adverse listening conditions, when participants are stimulated to use their episodic memories in addition to their mental lexicons.
  • Noordman, L. G. M., Vonk, W., Cozijn, R., & Frank, S. (2015). Causal inferences and world knowledge. In E. J. O'Brien, A. E. Cook, & R. F. Lorch (Eds.), Inferences during reading (pp. 260-289). Cambridge, UK: Cambridge University Press.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G. M., & Vonk, W. (2015). Inferences in Discourse, Psychology of. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed.) Vol. 12 (pp. 37-44). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.57012-3.

    Abstract

    An inference is defined as information that is not expressed explicitly by the text but is derived on the basis of the understander's knowledge and is encoded in the mental representation of the text. Inferencing is considered a central component in discourse understanding. Experimental methods to detect inferences, established findings, and some developments are reviewed. Attention is paid to the relation between inference processes and the brain.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2011). The grammar of perception. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 1-10). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Norcliffe, E., & Konopka, A. E. (2015). Vision and language in cross-linguistic research on sentence production. In R. K. Mishra, N. Srinivasan, & F. Huettig (Eds.), Attention and vision in language processing (pp. 77-96). New York: Springer. doi:10.1007/978-81-322-2443-3_5.

    Abstract

    To what extent are the planning processes involved in producing sentences fine-tuned to grammatical properties of specific languages? In this chapter we survey the small body of cross-linguistic research that bears on this question, focusing in particular on recent evidence from eye-tracking studies. Because eye-tracking methods provide a very fine-grained temporal measure of how conceptual and linguistic planning unfold in real time, they serve as an important complement to standard psycholinguistic methods. Moreover, the advent of portable eye-trackers in recent years has, for the first time, allowed eye-tracking techniques to be used with language populations that are located far away from university laboratories. This has created the exciting opportunity to extend the typological base of vision-based psycholinguistic research and address key questions in language production with new language comparisons.
  • Nordhoff, S., & Hammarström, H. (2011). Glottolog/Langdoc: Defining dialects, languages, and language families as collections of resources. Proceedings of the First International Workshop on Linked Science 2011 (LISC2011), Bonn, Germany, October 24, 2011.

    Abstract

    This paper describes the Glottolog/Langdoc project, an attempt to provide near-total bibliographical coverage of descriptive resources to the world's languages. Every reference is treated as a resource, as is every "languoid" [1]. References are linked to the languoids which they describe, and languoids are linked to the references described by them. Family relations between languoids are modeled in SKOS, as are relations across different classifications of the same languages. This setup allows the representation of languoids as collections of references, rendering the question of the definition of entities like 'Scots', 'West-Germanic' or 'Indo-European' more empirical.
  • Offrede, T., Mishra, C., Skantze, G., Fuchs, S., & Mooshammer, C. (2023). Do Humans Converge Phonetically When Talking to a Robot? In R. Skarnitzl, & J. Volin (Eds.), Proceedings of the 20th International Congress of Phonetic Sciences (pp. 3507-3511). Prague: GUARANT International.

    Abstract

    Phonetic convergence—i.e., adapting one’s speech towards that of an interlocutor—has been shown to occur in human-human conversations as well as human-machine interactions. Here, we investigate the hypothesis that human-to-robot convergence is influenced by the human’s perception of the robot and by the conversation’s topic. We conducted a within-subjects experiment in which 33 participants interacted with two robots differing in their eye gaze behavior: one looked constantly at the participant; the other produced gaze aversions, similarly to a human’s behavior. Additionally, the robot asked questions with increasing intimacy levels. We observed that the speakers tended to converge on F0 to the robots. However, this convergence to the robots was not modulated by how the speakers perceived them or by the topic’s intimacy. Interestingly, speakers produced lower F0 means when talking about more intimate topics. We discuss these findings in terms of current theories of conversational convergence.
  • Oostdijk, N., Goedertier, W., Van Eynde, F., Boves, L., Martens, J.-P., Moortgat, M., & Baayen, R. H. (2002). Experiences from the Spoken Dutch Corpus Project. In Third international conference on language resources and evaluation (pp. 340-347). Paris: European Language Resources Association.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.

    Abstract

    Even though most studies of language have focused on the speech channel and/or viewed language as an amodal abstract system, there is growing evidence on the role our bodily actions and perceptions play in language and communication. In this context, Özyürek discusses what our meaningful visible bodily actions reveal about our language capacity. Conducting cross-linguistic, behavioral, and neurobiological research, she shows that co-speech gestures reflect the imagistic, iconic aspects of events talked about and at the same time interact with language production and comprehension processes. Sign languages can also be characterized as having an abstract system of linguistic categories as well as using iconicity in several aspects of the language structure and in its processing. Studying language multimodally reveals how grounded language is in our visible bodily actions and opens up new lines of research to study language in its situated, natural face-to-face context.
  • Ozyurek, A., & Perniss, P. M. (2011). Event representations in signed languages. In J. Bohnemeyer, & E. Pederson (Eds.), Event representations in language and cognition (pp. 84-107). New York: Cambridge University Press.
  • Ozyurek, A. (2002). Speech-gesture relationship across languages and in second language learners: Implications for spatial thinking and speaking. In B. Skarabela, S. Fish, & A. H. Do (Eds.), Proceedings of the 26th annual Boston University Conference on Language Development (pp. 500-509). Somerville, MA: Cascadilla Press.
  • Patterson, R. D., & Cutler, A. (1989). Auditory preprocessing and recognition of speech. In A. Baddeley, & N. Bernsen (Eds.), Research directions in cognitive science: A european perspective: Vol. 1. Cognitive psychology (pp. 23-60). London: Erlbaum.
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes: Université de Nantes.

    Abstract

    Comprehension of pointing gestures is fundamental to human communication. However, the neural mechanisms that subserve the integration of pointing gestures and speech in visual contexts in comprehension are unclear. Here we present the results of an fMRI study in which participants watched images of an actor pointing at an object while they listened to her referential speech. The use of a mismatch paradigm revealed that the semantic unification of pointing gesture and speech in a triadic context recruits left inferior frontal gyrus. Complementing previous findings, this suggests that left inferior frontal gyrus semantically integrates information across modalities and semiotic domains.
  • Pereira Soares, S. M., Chaouch-Orozco, A., & González Alonso, J. (2023). Innovations and challenges in acquisition and processing methodologies for L3/Ln. In J. Cabrelli, A. Chaouch-Orozco, J. González Alonso, S. M. Pereira Soares, E. Puig-Mayenco, & J. Rothman (Eds.), The Cambridge handbook of third language acquisition (pp. 661-682). Cambridge: Cambridge University Press. doi:10.1017/9781108957823.026.

    Abstract

    The advent of psycholinguistic and neurolinguistic methodologies has provided new insights into theories of language acquisition. Sequential multilingualism is no exception, and some of the most recent work on the subject has incorporated a particular focus on language processing. This chapter surveys some of the work on the processing of lexical and morphosyntactic aspects of third or further languages, with different offline and online methodologies. We also discuss how, while increasingly sophisticated techniques and experimental designs have improved our understanding of third language acquisition and processing, simpler but clever designs can answer pressing questions in our theoretical debate. We provide examples of both sophistication and clever simplicity in experimental design, and argue that the field would benefit from incorporating a combination of both concepts into future work.
  • Perlman, M., Paul, J., & Lupyan, G. (2015). Congenitally deaf children generate iconic vocalizations to communicate magnitude. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    From an early age, people exhibit strong links between certain visual (e.g. size) and acoustic (e.g. duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch—higher pitch for greater magnitude—which counters the hypothesized, innate size “frequency code”, but fits with Mandarin language and culture. Thus our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
  • Perry, L., Perlman, M., & Lupyan, G. (2015). Iconicity in English vocabulary and its relation to toddlers’ word learning. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. R. Maglio (Eds.), Proceedings of the 37th Annual Cognitive Science Society Meeting (CogSci 2015) (pp. 315-320). Austin, TX: Cognitive Science Society.

    Abstract

    Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought.
  • Petersson, K. M., Forkstam, C., Inácio, F., Bramão, I., Araújo, S., Souza, A. C., Silva, S., & Castro, S. L. (2011). Artificial language learning. In A. Trevisan, & V. Wannmacher Pereira (Eds.), Alfabeltização e cognição (pp. 71-90). Porto Alegre, Brasil: Edipucrs.

    Abstract

    In this article we briefly review current behavioral and functional neuroimaging research on artificial language learning in children and adults. In the final section, we discuss a possible association between dyslexia and implicit learning. Recent results suggest that a deficit in implicit learning may contribute to the reading and writing difficulties observed in dyslexic individuals.
  • Petersson, K. M. (2002). Brain physiology. In R. Behn, & C. Veranda (Eds.), Proceedings of The 4th Southern European School of the European Physical Society - Physics in Medicine (pp. 37-38). Montreux: ESF.
  • Phillips, W., & Majid, A. (2011). Emotional sound symbolism. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 16-18). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005615.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process.
  • Rai, N. K., Rai, M., Paudyal, N. P., Schikowski, R., Bickel, B., Stoll, S., Gaenszle, M., Banjade, G., Rai, I. P., Bhatta, T. N., Sauppe, S., Rai, R. M., Rai, J. K., Rai, L. K., Rai, D. B., Rai, G., Rai, D., Rai, D. K., Rai, A., Rai, C. K., Rai, S. M., Rai, R. K., Pettigrew, J., & Dirksmeyer, T. (2011). छिन्ताङ शब्दकोश तथा व्याकरण [Chintang Dictionary and Grammar]. Kathmandu, Nepal: Chintang Language Research Program.
  • Rapold, C. J. (2011). Semantics of Khoekhoe reciprocal constructions. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 61-74). Amsterdam: Benjamins.

    Abstract

    This paper identifies four reciprocal construction types in Khoekhoe (Central Khoisan). After a brief description of the morphosyntax of each construction, semantic factors governing their choice are explored. Besides lexical semantics, the number of participants, timing of symmetric subevents, and symmetric conceptualisation are shown to account for the distribution of the four partially competing reciprocal constructions.
  • Raviv, L., & Kirby, S. (2023). Self domestication and the cultural evolution of language. In J. J. Tehrani, J. Kendal, & R. Kendal (Eds.), The Oxford Handbook of Cultural Evolution. Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780198869252.013.60.

    Abstract

    The structural design features of human language emerge in the process of cultural evolution, shaping languages over the course of communication, learning, and transmission. What role does this leave for biological evolution? This chapter highlights the biological bases and preconditions that underlie the particular type of prosocial behaviours and cognitive inference abilities that are required for languages to emerge via cultural evolution to begin with.
  • Reesink, G. (2002). The Eastern bird's head languages. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 1-44). Canberra: Pacific Linguistics.
  • Reesink, G. (2002). A grammar sketch of Sougb. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 181-275). Canberra: Pacific Linguistics.
  • Reesink, G. (2002). Mansim, a lost language of the Bird's Head. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 277-340). Canberra: Pacific Linguistics.
  • Regier, T., Khetarpal, N., & Majid, A. (2011). Inferring conceptual structure from cross-language data. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1488). Austin, TX: Cognitive Science Society.
  • Reinisch, E., & Weber, A. (2011). Adapting to lexical stress in a foreign accent. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1678-1681). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    An exposure-test paradigm was used to examine whether Dutch listeners can adapt their perception to non-canonical marking of lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard only words with correct initial stress, while another group also heard examples of unstressed initial syllables that were marked by high pitch, a possible stress cue in Dutch. Subsequently, listeners’ eye movements to target-competitor pairs with segmental overlap but different stress patterns were tracked while hearing Hungarian-accented Dutch. Listeners who had heard non-canonically produced words previously distinguished target-competitor pairs faster than listeners who had only been exposed to canonical forms before. This suggests that listeners can adapt quickly to speaker-specific realizations of non-canonical lexical stress.
  • Reinisch, E., Weber, A., & Mitterer, H. (2011). Listeners retune phoneme boundaries across languages [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2572-2572.

    Abstract

    Listeners can flexibly retune category boundaries of their native language to adapt to non-canonically produced phonemes. This only occurs, however, if the pronunciation peculiarities can be attributed to stable and not transient speaker-specific characteristics. Listening to someone speaking a second language, listeners could attribute non-canonical pronunciations either to the speaker or to the fact that she is modifying her categories in the second language. We investigated whether, following exposure to Dutch-accented English, Dutch listeners show effects of category retuning during test where they hear the same speaker speaking her native language, Dutch. Exposure was a lexical-decision task where either word-final [f] or [s] was replaced by an ambiguous sound. At test listeners categorized minimal word pairs ending in sounds along an [f]-[s] continuum. Following exposure to English words, Dutch listeners showed boundary shifts of a similar magnitude as following exposure to the same phoneme variants in their native language. This suggests that production patterns in a second language are deemed a stable characteristic. A second experiment suggests that category retuning also occurs when listeners are exposed to and tested with a native speaker of their second language. Listeners thus retune phoneme boundaries across languages.
  • Reis, A., Faísca, L., & Petersson, K. M. (2011). Literacia: Modelo para o estudo dos efeitos de uma aprendizagem específica na cognição e nas suas bases cerebrais. In A. Trevisan, J. J. Mouriño Mosquera, & V. Wannmacher Pereira (Eds.), Alfabeltização e cognição (pp. 23-36). Porto Alegro, Brasil: Edipucrs.

    Abstract

    The acquisition of reading and writing skills can be seen as a formal process of cultural transmission in which neurobiological and cultural factors interact. The systematic training required to learn to read and write may produce quantitative and qualitative changes both at the cognitive level and in the organization of the brain. Studying illiterate and literate subjects thus offers an opportunity to investigate the effects of a specific kind of learning on cognitive development and its brain bases. In this work, we review a set of behavioral and brain-imaging studies indicating that literacy has an impact on our cognitive functions and on brain organization. More specifically, we discuss differences between literate and illiterate participants in verbal and non-verbal cognitive domains, suggesting that the cognitive architecture is shaped, in part, by learning to read and write. Functional and structural neuroimaging data also indicate that the acquisition of an alphabetic orthography affects the organization and lateralization of cognitive functions.
  • Roberts, L., Pallotti, G., & Bettoni, C. (Eds.). (2011). EUROSLA Yearbook 2011. Amsterdam: John Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Roberts, S. G., Everett, C., & Blasi, D. (2015). Exploring potential climate effects on the evolution of human sound systems. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences [ICPhS 2015] Satellite Event: The Evolution of Phonetic Capabilities: Causes constraints, consequences (pp. 14-19). Glasgow: ICPHS.

    Abstract

    We suggest that it is now possible to conduct research on a topic which might be called evolutionary geophonetics. The main question is how the climate influences the evolution of language. This involves biological adaptations to the climate that may affect biases in production and perception; cultural evolutionary adaptations of the sounds of a language to climatic conditions; and influences of the climate on language diversity and contact. We discuss these ideas with special reference to a recent hypothesis that lexical tone is not adaptive in dry climates (Everett, Blasi & Roberts, 2015).
  • Robinson, S. (2011). Reciprocals in Rotokas. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 195-211). Amsterdam: Benjamins.

    Abstract

    This paper describes the syntax and semantics of reciprocity in the Central dialect of Rotokas, a non-Austronesian (Papuan) language spoken in Bougainville, Papua New Guinea. In Central Rotokas, there are three main reciprocal construction types, which differ formally according to where the reflexive/reciprocal marker (ora-) occurs in the clause: on the verb, on a pronominal argument or adjunct, or on a body part noun. The choice of construction type is determined by two considerations: the valency of the verb (i.e., whether it has one or two core arguments) and whether the reciprocal action is performed on a body part. The construction types are compatible with a wide range of the logical subtypes of reciprocity (strong, melee, chaining, etc.).
  • Roelofs, A. (2002). Storage and computation in spoken word production. In S. Nooteboom, F. Weerman, & F. Wijnen (Eds.), Storage and computation in the language faculty (pp. 183-216). Dordrecht: Kluwer.
  • Roelofs, A. (2002). Modeling of lexical access in speech production: A psycholinguistic perspective on the lexicon. In L. Behrens, & D. Zaefferer (Eds.), The lexicon in focus: Competition and convergence in current lexicology (pp. 75-92). Frankfurt am Main: Lang.
  • Sadakata, M., & McQueen, J. M. (2011). The role of variability in non-native perceptual learning of a Japanese geminate-singleton fricative contrast. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 873-876).

    Abstract

    The current study reports the enhancing effect of a high variability training procedure in the learning of a Japanese geminate-singleton fricative contrast. Dutch natives took part in a five-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. They heard either many repetitions of a limited set of words recorded by a single speaker (simple training) or fewer repetitions of a more variable set of words recorded by multiple speakers (variable training). Pre-post identification evaluations and a transfer test indicated clear benefits of the variable training.
  • Saito, H., & Kita, S. (2002). "Jesuchaa, kooi, imi" no hennshuu ni atat te [On the occasion of editing "Jesuchaa, Kooi, imi"]. In H. Saito, & S. Kita (Eds.), Kooi, jesuchaa, imi [Action, gesture, meaning] (pp. v-xi). Tokyo: Kyooritsu Shuppan.
  • Saito, H., & Kita, S. (Eds.). (2002). Jesuchaa, kooi, imi [Gesture, action, meaning]. Tokyo: Kyooritsu Shuppan.
  • Sander, J., Lieberman, A., & Rowland, C. F. (2023). Exploring joint attention in American Sign Language: The influence of sign familiarity. In M. Goldwater, F. K. Anggoro, B. K. Hayes, & D. C. Ong (Eds.), Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023) (pp. 632-638).

    Abstract

    Children’s ability to share attention with another social partner (i.e., joint attention) has been found to support language development. Despite the large amount of research examining the effects of joint attention on language in hearing populations, little is known about how deaf children learning sign languages achieve joint attention with their caregivers during natural social interaction and how caregivers provide and scaffold learning opportunities for their children. The present study investigates the properties and timing of joint attention surrounding familiar and novel naming events and their relationship to children’s vocabulary. Naturalistic play sessions of caretaker-child dyads using American Sign Language were analyzed with regard to naming events involving either familiar or novel object labels and the surrounding joint attention events. We observed that most naming events took place in the context of a successful joint attention event and that sign familiarity was related to the timing of naming events within the joint attention events. Our results suggest that caregivers are highly sensitive to their child’s visual attention in interactions and modulate joint attention differently in the context of naming events of familiar vs. novel object labels.
  • Sauermann, A., Höhle, B., Chen, A., & Järvikivi, J. (2011). Intonational marking of focus in different word orders in German children. In M. B. Washburn, K. McKinney-Bock, E. Varis, & A. Sawyer (Eds.), Proceedings of the 28th West Coast Conference on Formal Linguistics (pp. 313-322). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    The use of word order and intonation to mark focus in child speech has received some attention. However, past work usually examined each device separately or only compared the realizations of focused vs. non-focused constituents. This paper investigates the interaction between word order and intonation in the marking of different focus types in 4- to 5-year-old German-speaking children and an adult control group. An answer-reconstruction task was used to elicit syntactic (word order) and intonational focus marking of subjects and objects (locus of focus) in three focus types (broad, narrow, and contrastive focus). The results indicate that both children and adults used intonation to distinguish broad from contrastive focus, but they differed in the marking of narrow focus. Further, both groups preferred intonation to word order as a device for focus marking. But children showed an early sensitivity to the impact of focus type and focus location on word order variation and on phonetic means to mark focus.
  • Scharenborg, O., Boves, L., & de Veth, J. (2002). ASR in a human word recognition model: Generating phonemic input for Shortlist. In J. H. L. Hansen, & B. Pellom (Eds.), ICSLP 2002 - INTERSPEECH 2002 - 7th International Conference on Spoken Language Processing (pp. 633-636). ISCA Archive.

    Abstract

    The current version of Shortlist, a psycholinguistic model of human word recognition, suffers from two unrealistic constraints. First, the input to Shortlist must consist of a single string of phoneme symbols. Second, the current version of the search in Shortlist makes it difficult to deal with insertions and deletions in the input phoneme string. This research attempts to derive, fully automatically, a phoneme string from the acoustic signal whose length is as close as possible to the number of phonemes in the lexical representation of the word. We optimised an Automatic Phone Recogniser (APR) using two approaches, viz. varying the value of the mismatch parameter and optimising the APR output strings against the output of Shortlist. Both approaches show that it will be very difficult to satisfy the input requirements of the present version of Shortlist with a phoneme string generated by an APR.
  • Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning of liquids. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 149-152).

    Abstract

    Previous research on lexically-guided perceptual learning has focussed on contrasts that differ primarily in local cues, such as plosive and fricative contrasts. The present research had two aims: to investigate whether perceptual learning occurs for a contrast with non-local cues, the /l/-/r/ contrast, and to establish whether STRAIGHT can be used to create ambiguous sounds on an /l/-/r/ continuum. Listening experiments showed lexically-guided learning about the /l/-/r/ contrast. Listeners can thus tune in to unusual speech sounds characterised by non-local cues. Moreover, STRAIGHT can be used to create stimuli for perceptual learning experiments, opening up new research possibilities. Index Terms: perceptual learning, morphing, liquids, human word recognition, STRAIGHT.
  • Scharenborg, O., & Boves, L. (2002). Pronunciation variation modelling in a model of human word recognition. In Pronunciation Modeling and Lexicon Adaptation for Spoken Language Technology [PMLA-2002] (pp. 65-70).

    Abstract

    Due to pronunciation variation, many insertions and deletions of phones occur in spontaneous speech. Shortlist, a psycholinguistic model of human speech recognition, cannot deal well with phone insertions and deletions and is therefore not well suited for real-life input. The research presented in this paper shows how Shortlist can benefit from pronunciation variation modelling when dealing with real-life input. Pronunciation variation was modelled by including variants in the lexicon of Shortlist. A series of experiments was carried out to find the optimal acoustic model set for transcribing the training material that served as the basis for generating the variants. The Shortlist experiments clearly showed that Shortlist benefits from pronunciation variation modelling. However, its performance lags far behind that of other, more conventional speech recognisers.
  • Schiller, N. O., Costa, A., & Colomé, A. (2002). Phonological encoding of single words: In search of the lost syllable. In C. Gussenhoven, & N. Warner (Eds.), Laboratory Phonology VII (pp. 35-59). Berlin: Mouton de Gruyter.
  • Schiller, N. O., & Verdonschot, R. G. (2015). Accessing words from the mental lexicon. In J. Taylor (Ed.), The Oxford handbook of the word (pp. 481-492). Oxford: Oxford University Press.

    Abstract

    This chapter describes how speakers access words from the mental lexicon. Lexical access is a crucial component in the process of transforming thoughts into speech. Some theories consider lexical access to be strictly serial and discrete, while others view this process as cascading or even interactive, i.e. the different sub-levels influence each other. We discuss some of the evidence for and against these viewpoints, and also present arguments regarding the ongoing debate on how words are selected for production. Another important issue concerns access to morphologically complex words such as derived and inflected words, as well as compounds. Are these accessed as whole entities from the mental lexicon, or are their parts assembled online? This chapter tries to provide an answer to that question as well.
  • Schiller, N. O., Schmitt, B., Peters, J., & Levelt, W. J. M. (2002). 'BAnana' or 'baNAna'? Metrical encoding during speech production [Abstract]. In M. Baumann, A. Keinath, & J. Krems (Eds.), Experimentelle Psychologie: Abstracts der 44. Tagung experimentell arbeitender Psychologen (p. 195). TU Chemnitz, Philosophische Fakultät.

    Abstract

    The time course of metrical encoding, i.e. stress, during speech production is investigated. In a first experiment, participants were presented with pictures whose bisyllabic Dutch names had initial or final stress (KAno 'canoe' vs. kaNON 'cannon'; capital letters indicate stressed syllables). Picture names were matched for frequency and object recognition latencies. When participants were asked to judge whether picture names had stress on the first or second syllable, they showed significantly faster decision times for initially stressed targets than for targets with final stress. Experiment 2 replicated this effect with trisyllabic picture names (faster RTs for penultimate stress than for ultimate stress). In our view, these results reflect the incremental phonological encoding process. Wheeldon and Levelt (1995) found that segmental encoding is a process running from the beginning to the end of words. Here, we present evidence that the metrical pattern of words, i.e. stress, is also encoded incrementally.
  • Schiller, N. O. (2002). From phonetics to cognitive psychology: Psycholinguistics has it all. In A. Braun, & H. Masthoff (Eds.), Phonetics and its Applications. Festschrift for Jens-Peter Köster on the Occasion of his 60th Birthday. [Beihefte zur Zeitschrift für Dialektologie und Linguistik; 121] (pp. 13-24). Stuttgart: Franz Steiner Verlag.
  • Schmidt, J., Scharenborg, O., & Janse, E. (2015). Semantic processing of spoken words under cognitive load in older listeners. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Processing of semantic information in language comprehension has been suggested to be modulated by attentional resources. Consequently, cognitive load would be expected to reduce semantic priming, but studies have yielded inconsistent results. This study investigated whether cognitive load affects semantic activation in speech processing in older adults, and whether this is modulated by individual differences in cognitive and hearing abilities. Older adults participated in an auditory continuous lexical decision task in a low-load and high-load condition. The group analysis showed only a marginally significant reduction of semantic priming in the high-load condition compared to the low-load condition. The individual differences analysis showed that semantic priming was significantly reduced under increased load in participants with poorer attention-switching control. Hence, a resource-demanding secondary task may affect the integration of spoken words into a coherent semantic representation for listeners with poorer attentional skills.
  • Schmiedtová, V., & Schmiedtová, B. (2002). The color spectrum in language: The case of Czech: Cognitive concepts, new idioms and lexical meanings. In H. Gottlieb, J. Mogensen, & A. Zettersten (Eds.), Proceedings of The 10th International Symposium on Lexicography (pp. 285-292). Tübingen: Max Niemeyer Verlag.

    Abstract

    The representative corpus SYN2000 in the Czech National Corpus (CNK) project contains 100 million word forms taken from different types of texts. I have tried to determine the extent and depth of the linguistic material in the corpus. First, I chose the adjectives denoting the basic colors of the spectrum, along with other parts of speech (nouns and adverbs) derived from these adjectives. An analysis of three examples - black, white and red - shows the extent of the linguistic wealth and diversity we are looking at: because of size limitations, no existing dictionary is capable of embracing all the analyzed nuances. Currently, we can only hope that the next dictionary of contemporary Czech, built on the basis of the Czech National Corpus, will be electronic. Without size limitations, it would be possible to include many of the fine nuances of the language.
  • Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (2002). Exploring the time course of lexical access in language production: Picture word interference studies. In G. Altmann (Ed.), Psycholinguistics: Critical Concepts in Psychology [vol. 5] (pp. 168-191). London: Routledge.
  • Schriefers, H., & Vigliocco, G. (2015). Speech Production, Psychology of [Repr.]. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd ed) Vol. 23 (pp. 255-258). Amsterdam: Elsevier. doi:10.1016/B978-0-08-097086-8.52022-4.

    Abstract

    This article is reproduced from the previous edition, volume 22, pp. 14879–14882, © 2001, Elsevier Ltd.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: Université of Nantes.

    Abstract

    Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
  • Schubotz, L., Oostdijk, N., & Ernestus, M. (2015). Y’know vs. you know: What phonetic reduction can tell us about pragmatic function. In S. Lestrade, P. De Swart, & L. Hogeweg (Eds.), Addenda: Artikelen voor Ad Foolen (pp. 361-380). Nijmegen: Radboud University.
  • Schuerman, W. L., Nagarajan, S., & Houde, J. (2015). Changes in consonant perception driven by adaptation of vowel production to altered auditory feedback. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). London: International Phonetic Association.

    Abstract

    Adaptation to altered auditory feedback has been shown to induce subsequent shifts in perception. However, it is uncertain whether these perceptual changes may generalize to other speech sounds. In this experiment, we tested whether exposing the production of a vowel to altered auditory feedback affects perceptual categorization of a consonant distinction. In two sessions, participants produced CVC words containing the vowel /i/, while intermittently categorizing stimuli drawn from a continuum between "see" and "she." In the first session feedback was unaltered, while in the second session the formants of the vowel were shifted 20% towards /u/. Adaptation to the altered vowel was found to reduce the proportion of perceived /S/ stimuli. We suggest that this reflects an alteration to the sensorimotor mapping that is shared between vowels and consonants.
  • Seifart, F. (2002). El sistema de clasificación nominal del miraña. Bogotá: CCELA/Universidad de los Andes.
  • Seifart, F. (2002). Shape-distinctions picture-object matching task, with 2002 supplement. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 15-17). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Sekine, K. (2011). The development of spatial perspective in the description of large-scale environments. In G. Stam, & M. Ishino (Eds.), Integrating Gestures: The interdisciplinary nature of gesture (pp. 175-186). Amsterdam: John Benjamins Publishing Company.

    Abstract

    This research investigated developmental changes in children’s representations of large-scale environments as reflected in spontaneous gestures and speech produced during route descriptions. Four-, five-, and six-year-olds (N = 122) described the route from their nursery school to their own homes. Analysis of the children’s gestures showed that some 5- and 6-year-olds produced gestures that represented survey mapping, and they were categorized as a survey group. Children who did not produce such gestures were categorized as a route group. A comparison of the two groups revealed no significant differences in speech indices, with the exception that the survey group used significantly fewer right/left terms. As for gesture, the survey group produced more gestures than the route group. These results imply that an initial form of survey-map representation is acquired beginning at late preschool age.
  • Sekine, K., & Kajikawa, T. (2023). Does the spatial distribution of a speaker's gaze and gesture impact on a listener's comprehension of discourse? In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference. doi:10.17617/2.3527208.

    Abstract

    This study investigated the impact of a speaker's gaze direction on a listener's comprehension of discourse. Previous research suggests that hand gestures play a role in referent allocation, enabling listeners to better understand the discourse. The current study aims to determine whether the speaker's gaze direction has a similar effect on reference resolution as co-speech gestures. Thirty native Japanese speakers participated in the study and were assigned to one of three conditions: congruent, incongruent, or speech-only. Participants watched 36 videos of an actor narrating a story consisting of three sentences with two protagonists. The speaker consistently used hand gestures to allocate one protagonist to the lower right space and the other to the lower left space, while directing her gaze to the space of the target person (congruent), the space of the other person (incongruent), or no particular space (speech-only). Participants were required to verbally answer a question about the target protagonist involved in an accidental event as quickly as possible. Results indicate that participants in the congruent condition exhibited faster reaction times than those in the incongruent condition, although the difference was not significant. These findings suggest that the speaker's gaze direction alone is not enough to facilitate a listener's comprehension of discourse.
  • Senft, G. (2002). What should the ideal online-archive documenting linguistic data of various (endangered) languages and cultures offer to interested parties? Some ideas of a technically naive linguistic field researcher and potential user. In P. Austin, H. Dry, & P. Wittenburg (Eds.), Proceedings of the international LREC workshop on resources and tools in field linguistics (pp. 11-15). Paris: European Language Resources Association.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (2002). Feldforschung in einer deutschen Fabrik - oder: Trobriand ist überall. In H. Fischer (Ed.), Feldforschungen. Erfahrungsberichte zur Einführung (Neufassung) (pp. 207-226). Berlin: Reimer.
  • Senft, G. (2002). Linguistische Feldforschung. In H. M. Müller (Ed.), Arbeitsbuch Linguistik (pp. 353-363). Paderborn: Schöningh UTB.