Publications

Displaying 101 - 200 of 513
  • Crasborn, O. A., & Zwitserlood, I. (2008). The Corpus NGT: An online corpus for professionals and laymen. In O. A. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood, & E. Thoutenhoofd (Eds.), Construction and Exploitation of Sign Language Corpora (pp. 44-49). Paris: ELDA.

    Abstract

    The Corpus NGT is an ambitious effort to record and archive video data from Sign Language of the Netherlands (Nederlandse Gebarentaal: NGT), guaranteeing online access to all interested parties and long-term availability. Data are collected from 100 native signers of NGT of different ages and from various regions in the country. Parts of these data are annotated and/or translated; the annotations and translations are part of the corpus. The Corpus NGT is accommodated in the Browsable Corpus based at the Max Planck Institute for Psycholinguistics. In this paper we share our experiences in data collection, video processing, annotation/translation and licensing involved in building the corpus.
  • Cristia, A., Seidl, A., & Francis, A. L. (2011). Phonological features in infancy. In G. N. Clements, & R. Ridouane (Eds.), Where do phonological contrasts come from? Cognitive, physical and developmental bases of phonological features (pp. 303-326). Amsterdam: Benjamins.

    Abstract

    Features serve two main functions in the phonology of languages: they encode the distinction between pairs of contrastive phonemes (distinctive function); and they delimit sets of sounds that participate in phonological processes and patterns (classificatory function). We summarize evidence from a variety of experimental paradigms bearing on the functional relevance of phonological features. This research shows that while young infants may use abstract phonological features to learn sound patterns, this ability becomes more constrained with development and experience. Furthermore, given the lack of overlap between the ability to learn a pair of words differing in a single feature and the ability to learn sound patterns based on features, we argue for the separation of the distinctive and the classificatory function.
  • Cristia, A., & Seidl, A. (2011). Sensitivity to prosody at 6 months predicts vocabulary at 24 months. In N. Danis, K. Mesh, & H. Sung (Eds.), BUCLD 35: Proceedings of the 35th annual Boston University Conference on Language Development (pp. 145-156). Somerville, MA: Cascadilla Press.
  • Croijmans, I., & Majid, A. (2016). Language does not explain the wine-specific memory advantage of wine experts. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 141-146). Austin, TX: Cognitive Science Society.

    Abstract

    Although people are poor at naming odors, naming a smell helps to remember that odor. Previous studies show wine experts have better memory for smells, and they also name smells differently than novices. Is wine experts’ odor memory verbally mediated? And is the odor memory advantage that experts have over novices restricted to odors in their domain of expertise, or does it generalize? Twenty-four wine experts and 24 novices smelled wines, wine-related odors and common odors, and remembered these. Half the participants also named the smells. Wine experts had better memory for wines, but not for the other odors, indicating their memory advantage is restricted to wine. Wine experts named odors better than novices, but there was no relationship between experts’ ability to name odors and their memory for odors. This suggests experts’ odor memory advantage is not linguistically mediated, but may be the result of differential perceptual learning.
  • Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).

    Abstract

    Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A. (2017). Converging evidence for abstract phonological knowledge in speech processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1447-1448). Austin, TX: Cognitive Science Society.

    Abstract

    The perceptual processing of speech is a constant interplay of multiple competing albeit convergent processes: acoustic input vs. higher-level representations, universal mechanisms vs. language-specific, veridical traces of speech experience vs. construction and activation of abstract representations. The present summary concerns the third of these issues. The ability to generalise across experience and to deal with resulting abstractions is the hallmark of human cognition, visible even in early infancy. In speech processing, abstract representations play a necessary role in both production and perception. New sorts of evidence are now informing our understanding of the breadth of this role.
  • Ip, M., & Cutler, A. (2016). Cross-language data on five types of prosodic focus. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 330-334).

    Abstract

    To examine the relative roles of language-specific and language-universal mechanisms in the production of prosodic focus, we compared production of five different types of focus by native speakers of English and Mandarin. Two comparable dialogues were constructed for each language, with the same words appearing in focused and unfocused position; 24 speakers recorded each dialogue in each language. Duration, F0 (mean, maximum, range), and rms-intensity (mean, maximum) of all critical word tokens were measured. Across the different types of focus, cross-language differences were observed in the degree to which English versus Mandarin speakers use the different prosodic parameters to mark focus, suggesting that while prosody may be universally available for expressing focus, the means of its employment may be considerably language-specific.
  • Ip, M. H. K., & Cutler, A. (2017). Intonation facilitates prediction of focus even in the presence of lexical tones. In Proceedings of Interspeech 2017 (pp. 1218-1222). doi:10.21437/Interspeech.2017-264.

    Abstract

    In English and Dutch, listeners entrain to prosodic contours to predict where focus will fall in an utterance. However, is this strategy universally available, even in languages with different phonological systems? In a phoneme detection experiment, we examined whether prosodic entrainment is also found in Mandarin Chinese, a tone language, where in principle the use of pitch for lexical identity may take precedence over the use of pitch cues to salience. Consistent with the results from Germanic languages, response times were facilitated when preceding intonation predicted accent on the target-bearing word. Acoustic analyses revealed greater F0 range in the preceding intonation of the predicted-accent sentences. These findings have implications for how universal and language-specific mechanisms interact in the processing of salience.
  • Cutler, A. (1990). From performance to phonology: Comments on Beckman and Edwards's paper. In J. Kingston, & M. Beckman (Eds.), Papers in laboratory phonology I: Between the grammar and physics of speech (pp. 208-214). Cambridge: Cambridge University Press.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (“a coup stick snot with standing”: acoustics notwithstanding). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., Andics, A., & Fang, Z. (2011). Inter-dependent categorization of voices and segments. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences [ICPhS 2011] (pp. 552-555). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Listeners performed speeded two-alternative choice between two unfamiliar and relatively similar voices or between two phonetically close segments, in VC syllables. For each decision type (segment, voice), the non-target dimension (voice, segment) either was constant, or varied across four alternatives. Responses were always slower when a non-target dimension varied than when it did not, but the effect of phonetic variation on voice identity decision was stronger than that of voice variation on phonetic identity decision. Cues to voice and segment identity in speech are processed inter-dependently, but hard categorization decisions about voices draw on, and are hence sensitive to, segmental information.
  • Cutler, A. (1990). Exploiting prosodic probabilities in speech segmentation. In G. Altmann (Ed.), Cognitive models of speech processing: Psycholinguistic and computational perspectives (pp. 105-121). Cambridge, MA: MIT Press.
  • Cutler, A., & Pearson, M. (1985). On the analysis of prosodic turn-taking cues. In C. Johns-Lewis (Ed.), Intonation in discourse (pp. 139-155). London: Croom Helm.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1985). Performance measures of lexical complexity. In G. Hoppenbrouwers, P. A. Seuren, & A. Weijters (Eds.), Meaning and the lexicon (p. 75). Dordrecht: Foris.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1990). Syllabic lengthening as a word boundary cue. In R. Seidl (Ed.), Proceedings of the 3rd Australian International Conference on Speech Science and Technology (pp. 324-328). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Bisyllabic sequences which could be interpreted as one word or two were produced in sentence contexts by a trained speaker, and syllabic durations measured. Listeners judged whether the bisyllables, excised from context, were one word or two. The proportion of two-word choices correlated positively with measured duration, but only for bisyllables stressed on the second syllable. The results may suggest a limit for listener sensitivity to syllabic lengthening as a word boundary cue.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université d'Aix-en-Provence.
  • Cutler, A., Norris, D., & Van Ooijen, B. (1990). Vowels as phoneme detection targets. In Proceedings of the First International Conference on Spoken Language Processing (pp. 581-584).

    Abstract

    Phoneme detection is a psycholinguistic task in which listeners' response time to detect the presence of a pre-specified phoneme target is measured. Typically, detection tasks have used consonant targets. This paper reports two experiments in which subjects responded to vowels as phoneme detection targets. In the first experiment, targets occurred in real words, in the second in nonsense words. Response times were long by comparison with consonantal targets. Targets in initial syllables were responded to much more slowly than targets in second syllables. Strong vowels were responded to faster than reduced vowels in real words but not in nonwords. These results suggest that the process of phoneme detection produces different results for vowels and for consonants. We discuss possible explanations for this difference, in particular the possibility of language-specificity.
  • Daly, T., Chen, X. S., & Penny, D. (2011). How old are RNA networks? In L. J. Collins (Ed.), RNA infrastructure and networks (pp. 255-273). New York: Springer Science + Business Media and Landes Bioscience.

    Abstract

    Some major classes of RNAs (such as mRNA, rRNA, tRNA and RNase P) are ubiquitous in all living systems, so are inferred to have arisen early during the origin of life. However, the situation is not so clear for the system of RNA regulatory networks that continue to be uncovered, especially in eukaryotes. It is increasingly being recognised that networks of small RNAs are important for regulation in all cells, but it is not certain whether the origins of these networks are as old as rRNAs and tRNAs. Another group of ncRNAs, including snoRNAs, occurs mainly in archaea and eukaryotes, and their ultimate origin is less certain, although perhaps the simplest hypothesis is that they were present in earlier stages of life and were lost from bacteria. Some RNA networks may trace back to an early stage when there was just RNA and proteins: the RNP world, before DNA.
  • Danielsen, S., Dunn, M., & Muysken, P. (2011). The spread of the Arawakan languages: A view from structural phylogenetics. In A. Hornborg, & J. D. Hill (Eds.), Ethnicity in ancient Amazonia: Reconstructing past identities from archaeology, linguistics, and ethnohistory (pp. 173-196). Boulder: University Press of Colorado.
  • Dediu, D. (2008). Causal correlations between genes and linguistic features: The mechanism of gradual language evolution. In A. D. M. Smith, K. Smith, & R. Ferrer i Cancho (Eds.), The evolution of language: Proceedings of the 7th International Conference (EVOLANG7) (pp. 83-90). Singapore: World Scientific Press.

    Abstract

    The causal correlations between human genetic variants and linguistic (typological) features could represent the mechanism required for gradual, accretionary models of language evolution. The causal link is mediated by the process of cultural transmission of language across generations in a population of genetically biased individuals. The particular case of Tone, ASPM and Microcephalin is discussed as an illustration. It is proposed that this type of genetically-influenced linguistic bias, coupled with a fundamental role for genetic and linguistic diversities, provides a better explanation for the evolution of language and linguistic universals.
  • Dediu, D., & Moisik, S. (2016). Defining and counting phonological classes in cross-linguistic segment databases. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2016: 10th International Conference on Language Resources and Evaluation (pp. 1955-1962). Paris: European Language Resources Association (ELRA).

    Abstract

    Recently, there has been an explosion in the availability of large, good-quality cross-linguistic databases such as WALS (Dryer & Haspelmath, 2013), Glottolog (Hammarstrom et al., 2015) and Phoible (Moran & McCloy, 2014). Databases such as Phoible contain the actual segments used by various languages as they are given in the primary language descriptions. However, this segment-level representation cannot be used directly for analyses that require generalizations over classes of segments that share theoretically interesting features. Here we present a method and the associated R (R Core Team, 2014) code that allows the flexible definition of such meaningful classes and that can identify the sets of segments falling into such a class for any language inventory. The method and its results are important for those interested in exploring cross-linguistic patterns of phonetic and phonological diversity and their relationship to extra-linguistic factors and processes such as climate, economics, history or human genetics.
  • Dediu, D., & Moisik, S. R. (2016). Anatomical biasing of click learning and production: An MRI and 3d palate imaging study. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/57.html.

    Abstract

    The current paper presents results for data on click learning obtained from a larger imaging study (using MRI and 3D intraoral scanning) designed to quantify and characterize intra- and inter-population variation of vocal tract structures and the relation of this to speech production. The aim of the click study was to ascertain whether and to what extent vocal tract morphology influences (1) the ability to learn to produce clicks and (2) the productions of those that successfully learn to produce these sounds. The results indicate that the presence of an alveolar ridge certainly does not prevent an individual from learning to produce click sounds (1). However, the subtle details of how clicks are produced may indeed be driven by palate shape (2).
  • Dediu, D. (2017). From biology to language change and diversity. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistics systems (pp. 39-52). Berlin: Language Science Press.
  • Dijkstra, K., & Casasanto, D. (2008). Autobiographical memory and motor action [Abstract]. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (p. 1549). Austin, TX: Cognitive Science Society.

    Abstract

    Retrieval of autobiographical memories is facilitated by activation of perceptuo-motor aspects of the experience, for example a congruent body position at the time of the experiencing and the time of retelling (Dijkstra, Kaschak, & Zwaan, 2007). The present study examined whether similar retrieval facilitation occurs when the direction of motor action is congruent with the valence of emotional memories. Consistent with evidence that people mentally represent emotions spatially (Casasanto, in press), participants moved marbles between vertically stacked boxes at a higher rate when the direction of movement was congruent with the valence of the memory they retrieved (e.g., upward for positive memories, downward for negative memories) than when direction and valence were incongruent (t(22)=4.24, p<.001). In addition, valence-congruent movements facilitated access to these memories, resulting in shorter retrieval times (t(22)=2.43, p<.05). Results demonstrate bidirectional influences between the emotional content of autobiographical memories and irrelevant motor actions.
  • Dijkstra, N., & Fikkert, P. (2011). Universal constraints on the discrimination of Place of Articulation? Asymmetries in the discrimination of 'paan' and 'taan' by 6-month-old Dutch infants. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th Annual Boston University Conference on Language Development. Volume 1 (pp. 170-182). Somerville, MA: Cascadilla Press.
  • Dimitrova, D. V., Redeker, G., Egg, K. M. M., & Hoeks, J. C. J. (2008). Linguistic and extra-linguistic determinants of accentuation in Dutch. In P. Barbosa, & S. Madureira (Eds.), Proceedings of the 4th International Conference on Speech Prosody (pp. 409-412). ISCA Archive.

    Abstract

    In this paper we discuss the influence of semantically unexpected information on the prosodic realization of contrast. For this purpose, we examine the interplay between unexpectedness and various discourse factors that have been claimed to enhance the accentuation of contrastive information: contrast direction, syntactic status, and discourse distance. We conducted a production experiment in Dutch in which speakers described scenes consisting of moving fruits with unnatural colors. We found that a general cognitive factor such as the unexpectedness of a property has a strong impact on the intonational marking of contrast, over and above the influence of the immediate discourse context.
  • Dimitrova, D. V., Redeker, G., Egg, M., & Hoeks, J. C. (2008). Prosodic correlates of linguistic and extra-linguistic information in Dutch. In B. Love, K. McRae, & V. Sloutsky (Eds.), Proceedings of the 30th Annual Conference on the Cognitive Science Society (pp. 2191-2196). Washington: Cognitive Science Society.

    Abstract

    In this paper, we discuss the interplay of factors that influence the intonational marking of contrast in Dutch. In particular, we examine how prominence is expressed at the prosodic level when semantically abnormal information conflicts with contrastive information. For this purpose, we conducted a production experiment in Dutch in which speakers described scenes containing fruits with unnatural colors. We found that semantically abnormal information invokes cognitive prominence which corresponds to intonational prominence. Moreover, the results show that abnormality may overrule the accentual marking of information structural categories such as contrastive focus. If semantically abnormal information becomes integrated into the larger discourse context, its prosodic prominence decreases in favor of the signaling of information structural categories such as contrastive focus.
  • Dimroth, C. (2008). Perspectives on second language acquisition at different ages. In J. Philp, R. Oliver, & A. Mackey (Eds.), Second language acquisition and the younger learner: Child's play? (pp. 53-79). Amsterdam: Benjamins.

    Abstract

    Empirical studies addressing the age factor in second language acquisition have mainly been concerned with a comparison of end state data (from learners before and after the closure of a putative Critical Period for language acquisition) to the native speaker norm. Based on longitudinal corpus data, this paper investigates the effect of age on the end state, rate and process of acquisition, and addresses the question of whether different grammatical domains are equally affected. To this end, the paper presents summarized findings from the acquisition of word order and inflectional morphology in L2 German by Russian learners of different ages and discusses theoretical implications that can be drawn from this evidence.
  • Dimroth, C., & Haberzettl, S. (2008). Je älter desto besser: Der Erwerb der Verbflexion im Kindesalter. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 227-238). Frankfurt am Main: Lang.
  • Dimroth, C. (2008). Kleine Unterschiede in den Lernvoraussetzungen beim ungesteuerten Zweitspracherwerb: Welche Bereiche der Zielsprache Deutsch sind besonders betroffen? In B. Ahrenholz (Ed.), Kinder und Migrationshintergrund: Spracherwerb und Fördermöglichkeiten (pp. 117-133). Freiburg: Fillibach.
  • Dingemanse, M. (2017). Brain-to-brain interfaces and the role of language in distributing agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 59-66). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190457204.003.0007.

    Abstract

    Brain-to-brain interfaces, in which brains are physically connected without the intervention of language, promise new ways of collaboration and communication between humans. I examine the narrow view of language implicit in current conceptions of brain-to-brain interfaces and put forward a constructive alternative, stressing the role of language in organising joint agency. Two features of language stand out as crucial: its selectivity, which provides people with much-needed filters between public words and private worlds; and its negotiability, which provides people with systematic opportunities for calibrating understanding and expressing consent and dissent. Without these checks and balances, brain-to-brain interfaces run the risk of reducing people to the level of amoeba in a slime mold; with them, they may mature to become useful extensions of human agency.
  • Dingemanse, M., Hill, C., Majid, A., & Levinson, S. C. (2008). Ethnography of the senses. In A. Majid (Ed.), Field manual volume 11 (pp. 18-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492935.

    Abstract

    This entry provides orientation and task suggestions on how to explore the perceptual world of your field site and the interaction between the cultural world and the sensory lexicon in your community. The material consists of procedural texts, soundscapes, and other documentary and observational tasks.
  • Dingemanse, M., Van Leeuwen, T., & Majid, A. (2011). Mapping across senses: Two cross-modal association tasks. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 11-15). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005579.
  • Dingemanse, M. (2011). Ezra Pound among the Mawu: Ideophones and iconicity in Siwu. In P. Michelucci, O. Fischer, & C. Ljungberg (Eds.), Semblance and Signification (pp. 39-54). Amsterdam: John Benjamins.

    Abstract

    The Mawu people of eastern Ghana make common use of ideophones: marked words that depict sensory imagery. Ideophones have been described as “poetry in ordinary language,” yet the shadow of Lévy-Bruhl, who assigned such words to the realm of primitivity, has loomed large over linguistics and literary theory alike. The poet Ezra Pound is a case in point: while his fascination with Chinese characters spawned the ideogrammic method, the mimicry and gestures of the “primitive languages in Africa” were never more than a mere curiosity to him. This paper imagines Pound transposed into the linguaculture of the Mawu. What would have struck him about their ways of ‘charging language’ with imagery? I juxtapose Pound’s views of the poetic image with an analysis of how different layers of iconicity in ideophones combine to depict sensory imagery. This exercise illuminates aspects of what one might call ‘the ideophonic’.
  • Dingemanse, M. (2017). On the margins of language: Ideophones, interjections and dependencies in linguistic theory. In N. J. Enfield (Ed.), Dependencies in language (pp. 195-202). Berlin: Language Science Press. doi:10.5281/zenodo.573781.

    Abstract

    Linguistic discovery is viewpoint-dependent, just like our ideas about what is marginal and what is central in language. In this essay I consider two supposed marginalia —ideophones and interjections— which provide some useful pointers for widening our field of view. Ideophones challenge us to take a fresh look at language and consider how it is that our communication system combines multiple modes of representation. Interjections challenge us to extend linguistic inquiry beyond sentence level, and remind us that language is social-interactive at core. Marginalia, then, are not the obscure, exotic phenomena that can be safely ignored: they represent opportunities for innovation and invite us to keep pushing the edges of linguistic inquiry.
  • Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2011). The thickness of musical pitch: Psychophysical evidence for the Whorfian hypothesis. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 537-542). Austin, TX: Cognitive Science Society.
  • Doumas, L. A. A., Hamer, A., Puebla, G., & Martin, A. E. (2017). A theory of the detection and learning of structured representations of similarity and relative magnitude. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1955-1960). Austin, TX: Cognitive Science Society.

    Abstract

    Responding to similarity, difference, and relative magnitude (SDM) is ubiquitous in the animal kingdom. However, humans seem unique in the ability to represent relative magnitude (‘more’/‘less’) and similarity (‘same’/‘different’) as abstract relations that take arguments (e.g., greater-than (x,y)). While many models use structured relational representations of magnitude and similarity, little progress has been made on how these representations arise. Models that develop these representations assume access to computations of similarity and magnitude a priori, either encoded as features or as output of evaluation operators. We detail a mechanism for producing invariant responses to “same”, “different”, “more”, and “less”, which can be exploited to compute similarity and magnitude as an evaluation operator. Using DORA (Doumas, Hummel, & Sandhofer, 2008), these invariant responses can be used to learn structured relational representations of relative magnitude and similarity from pixel images of simple shapes.
  • Doumas, L. A., & Martin, A. E. (2016). Abstraction in time: Finding hierarchical linguistic structure in a model of relational processing. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2279-2284). Austin, TX: Cognitive Science Society.

    Abstract

    Abstract mental representation is fundamental for human cognition. Forming such representations in time, especially from dynamic and noisy perceptual input, is a challenge for any processing modality, but perhaps none so acutely as for language processing. We show that LISA (Hummel & Holyoak, 1997) and DORA (Doumas, Hummel, & Sandhofer, 2008), models built to process and to learn structured (i.e., symbolic) representations of conceptual properties and relations from unstructured inputs, show oscillatory activation during processing that is highly similar to the cortical activity elicited by the linguistic stimuli from Ding et al. (2016). We argue, as do Ding et al. (2016), that this activation reflects the formation of hierarchical linguistic representation, and furthermore, that the kind of computational mechanisms in LISA/DORA (e.g., temporal binding by systematic asynchrony of firing) may underlie the formation of abstract linguistic representations in the human brain. It may be this repurposing that allowed for the generation or emergence of hierarchical linguistic structure, and therefore, human language, from extant cognitive and neural systems. We conclude that models of thinking and reasoning and models of language processing must be integrated, not only for increased plausibility, but in order to advance both fields towards a larger integrative model of human cognition.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Processing and adaptation to ambiguous sounds during the course of perceptual learning. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2811-2815). doi:10.21437/Interspeech.2016-814.

    Abstract

    Listeners use their lexical knowledge to interpret ambiguous sounds, and retune their phonetic categories to include this ambiguous sound. Although there is ample evidence for lexically-guided retuning, the adaptation process is not fully understood. Using a lexical decision task with an embedded auditory semantic priming task, the present study investigates whether words containing an ambiguous sound are processed in the same way as “natural” words and whether adaptation to the ambiguous sound tends to equalize the processing of “ambiguous” and natural words. Analyses of the yes/no responses and reaction times to natural and “ambiguous” words showed that words containing an ambiguous sound were accepted as words less often and were processed slower than the same words without ambiguity. The difference in acceptance disappeared after exposure to approximately 15 ambiguous items. Interestingly, lower acceptance rates and slower processing did not have an effect on the processing of semantic information of the following word. However, lower acceptance rates of ambiguous primes predict slower reaction times of these primes, suggesting an important role of stimulus-specific characteristics in triggering lexically-guided perceptual learning.
  • Drude, S. (2011). Awetí in relation with Kamayurá: The two Tupian languages of the Upper Xingu. In B. Franchetto (Ed.), Alto Xingu. Uma sociedade multilíngüe (pp. 155-192). Rio de Janeiro: Museu do Indio - FUNAI.

    Abstract

    The article analyzes the relation between Aweti and Kamayurá on different levels. Both languages belong to different branches of the subfamily “Maweti-Guarani” within the large Tupi ‘stock’. Both peoples arrived rather late to the complex Upper Xinguan society, but probably independently and from different directions. Both resulted from mergers of different groups and suffered a dramatic demographic decline in the first half of the last century. There is no concrete evidence that these groups spoke varieties of more than two different languages (Pre-Aweti and Pre-Kamayurá). Today, many Aweti are at least passive bilinguals with Kamayurá, their most important allies, but the opposite does not hold. The article also discusses the relations between the languages on the main structural levels. In phonology, the phoneme inventories are compared and the sound changes are listed that occurred from the hypothetical proto-language “Proto-Maweti-Guarani” to Aweti, on the one hand, and to Proto-Tupi-Guarani and further to Kamayurá, on the other. In morpho-syntax, the article offers a comparison of the person systems and of affixes in general, treating in particular the so-called ‘relational prefixes’, which do not exist in Aweti. The most important shared syntactic properties are also listed. There seems to be very little mutual lexical borrowing. In the appendix, a list of more than 60 cognates with reconstructed proto-forms is given. Keywords: Aweti; Kamayurá; Sociolinguistics; History; Phonology.
  • Drude, S. (2008). Die Personenpräfixe des Guaraní und ihre lexikographische Behandlung. In W. Dietrich, & H. Symeonidis (Eds.), Geschichte und Aktualität der deutschsprachigen Guaraní-Philologie: Akten der Guaraní-Tagung in Kiel und Berlin 25.-27. Mai 2000 (pp. 198-234). Berlin: Lit Verlag.

    Abstract

    This contribution to the Kiel symposium presents the results of one part of my work on Guaraní, namely a proposal for the analysis of the person prefixes of this language and of the grammatical categories associated with them. The lexicographic question indicated in the title requires some further explanation, which I will give together with a brief account of the motivation for my investigations.
  • Drude, S. (2011). Comparando línguas alto‐xinguanas: Metodologia e bases de dados comparativos. In B. Franchetto (Ed.), Alto Xingu. Uma sociedade multilíngüe (pp. 39-56). Rio de Janeiro: Museu do Indio - FUNAI.

    Abstract

    A key for understanding the Upper Xingu system is the comparison of the different languages which are part of that multilingual society. This article discusses the notion of ‘comparing languages’ and delineates a research program according to which a fruitful comparison can be done on four levels: 1) structural (phonological and morphosyntactic), 2) lexical (semantic structure of the lexica and individual lexical items), 3) discourse (figures of speech and thought), 4) content (in particular, narratives). The language data of the project gathered so far (focusing on levels 2 and 4) are described in detail: 10 comparative word lists from different semantic domains, and a core of 5 analogous texts of different genres. Finally, some general considerations are offered about how to analyze both the similarities and the divergences found among the compared material.
  • Drude, S. (2011). 'Derivational verbs' and other multi-verb constructions in Aweti and Tupi-Guarani. In A. Y. Aikhenvald, & P. C. Muysken (Eds.), Multi-verb constructions: A view from the Americas (pp. 213-254). Leiden: Brill.
  • Drude, S. (2008). Inflectional units and their effects: The case of verbal prefixes in Guaraní. In R. Sackmann (Ed.), Explorations in integrational linguistics: Four essays on German, French, and Guaraní (pp. 153-189). Amsterdam: Benjamins.

    Abstract

    With the present essay I pursue a threefold aim, as will be explained in the following paragraphs. Since I cannot expect my readers to be familiar with the language studied, Guaraní, more information about this language will be given in the next subsection.
  • Drude, S. (2008). Tense, aspect and mood in Awetí verb paradigms: Analytic and synthetic forms. In K. D. Harrison, D. S. Rood, & A. Dwyer (Eds.), Lessons from documented endangered languages (pp. 67-110). Amsterdam: Benjamins.

    Abstract

    This paper describes the verbal Tense-Aspect-Mood system of Awetí (Tupian, Central Brazil) in a Word-and-Paradigm approach. One classification of Awetí verb forms contains clear aspect categories. A second set of independent classifications renders at least four moods and contains a third major TAM classification, factuality, that has one mainly temporal category Future, while others are partially or wholly modal. Structural categories reflect the formal composition of the forms. Some forms are synthetic, ‘marked’ only by means of affixes, but many are analytic, containing auxiliary particles. With selected sample forms we demonstrate in detail the interplay of structural and functional categories in Awetí verb paradigms.
  • Edmiston, P., Perlman, M., & Lupyan, G. (2017). Creating words from iterated vocal imitation. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 331-336). Austin, TX: Cognitive Science Society.

    Abstract

    We report the results of a large-scale (N=1571) experiment to investigate whether spoken words can emerge from the process of repeated imitation. Participants played a version of the children’s game “Telephone”. The first generation was asked to imitate recognizable environmental sounds (e.g., glass breaking, water splashing); subsequent generations imitated the imitators for a total of 8 generations. We then examined whether the vocal imitations became more stable and word-like, retained a resemblance to the original sound, and became more suitable as learned category labels. The results showed (1) the imitations became progressively more word-like, (2) even after 8 generations, they could be matched above chance to the environmental sound that motivated them, and (3) imitations from later generations were more effective as learned category labels. These results show how repeated imitation can create progressively more word-like forms while retaining a semblance of iconicity.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eisner, F., & Scott, S. K. (2008). Speech and auditory processing in the cortex: Evidence from functional neuroimaging. In A. Cacace, & D. McFarland (Eds.), Controversies in central auditory processing disorder. San Diego, Ca: Plural Publishing.
  • Ellert, M., Roberts, L., & Järvikivi, J. (2011). Verarbeitung und Disambiguierung pronominaler Referenz in der Fremdsprache Deutsch: Eine psycholinguistische Studie. In A. Krafft, & C. Spiegel (Eds.), Sprachliche Förderung und Weiterbildung-Transdisziplinär (pp. 51-68). Frankfurt am Main: Peter Lang.
  • Enfield, N. J. (2008). Verbs and multi-verb construction in Lao. In A. V. Diller, J. A. Edmondson, & Y. Luo (Eds.), The Tai-Kadai languages (pp. 83-183). London: Routledge.
  • Enfield, N. J., Kendrick, K. H., De Ruiter, J. P., Stivers, T., & Levinson, S. C. (2011). Building a corpus of spontaneous interaction. In Field manual volume 14 (pp. 29-32). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005610.

    Abstract

    This revised version supersedes all previous versions (e.g., Field Manual 2010).
  • Enfield, N. J., & Majid, A. (2008). Constructions in 'language and perception'. In A. Majid (Ed.), Field Manual Volume 11 (pp. 11-17). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492949.

    Abstract

    This field guide is for eliciting information about grammatical resources used in describing perceptual events and perception-based properties and states. A list of leading questions outlines an underlying semantic space for events/states of perception, against which language-specific constructions may be defined. It should be used as an entry point into a flexible exploration of the structures and constraints which are specific to the language you are working on. The goal is to provide a cross-linguistically comparable description of the constructions of a language used in describing perceptual events and states. The core focus is to discover any sensory asymmetries, i.e., ways in which different sensory modalities are treated differently with respect to these constructions.
  • Enfield, N. J. (2011). Description of reciprocal situations in Lao. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 129-149). Amsterdam: Benjamins.

    Abstract

    This article describes the grammatical resources available to speakers of Lao for describing situations that can be described broadly as ‘reciprocal’. The analysis is based on complementary methods: elicitation by means of non-linguistic stimuli, exploratory consultation with native speakers, and investigation of corpora of spontaneous language use. Typically, reciprocal situations are described using a semantically general ‘collaborative’ marker on an action verb. The resultant meaning is that some set of people participate in a situation ‘together’, broadly construed. The collaborative marker is found in two distinct syntactic constructions, which differ in terms of their information structural contexts of use. The paper first explores in detail the semantic range of the collaborative marker as it occurs in the more common ‘Type 1’ construction, and then discusses a special pragmatic context for the ‘Type 2’ construction. There is some methodological discussion concerning the results of elicitation via video stimuli. The chapter also discusses two specialised constructions dedicated to the expression of strict reciprocity.
  • Enfield, N. J. (2011). Dynamics of human diversity in mainland Southeast Asia: Introduction. In N. J. Enfield (Ed.), Dynamics of human diversity: The case of mainland Southeast Asia (pp. 1-8). Canberra: Pacific Linguistics.
  • Enfield, N. J. (2011). Elements of formulation. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied interaction: Language and body in the material world (pp. 59-66). Cambridge: Cambridge University Press.

    Abstract

    (from the chapter) Recognizing others' goals in the flow of interaction is complex, not only for analysts but for participants too. This chapter explores a semiotic approach, with the utterance-in-context as a basic-level unit, and where the interpreter, not the producer, is the driving force in how utterances come to have meaning. We first want to know how people extract meaning from others' communicative behavior. We then ask what are the elements of producers' formulation of communicative actions in anticipation of how others will interpret that behavior.
  • Enfield, N. J. (2008). Common ground as a resource for social affiliation. In I. Kecskes, & J. L. Mey (Eds.), Intention, common ground and the egocentric speaker-hearer (pp. 223-254). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2008). Lao linguistics in the 20th century and since. In Y. Goudineau, & M. Lorrillard (Eds.), Recherches nouvelles sur le Laos (pp. 435-452). Paris: EFEO.
  • Enfield, N. J., & Levinson, S. C. (2008). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 11 (pp. 77-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492937.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., & Levinson, S. C. (2011). Metalanguage for speech acts. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 33-35). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.1005611.

    Abstract

    This version is reprinted from the 2010 Field Manual.
  • Enfield, N. J. (2017). Language in the Mainland Southeast Asia Area. In R. Hickey (Ed.), The Cambridge Handbook of Areal Linguistics (pp. 677-702). Cambridge: Cambridge University Press. doi:10.1017/9781107279872.026.
  • Enfield, N. J. (2011). Linguistic diversity in mainland Southeast Asia. In N. J. Enfield (Ed.), Dynamics of human diversity: The case of mainland Southeast Asia (pp. 63-80). Canberra: Pacific Linguistics.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2008). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 11 (pp. 80-81). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492939.

    Abstract

    This Field Manual entry has been superseded by the 2009 version: https://doi.org/10.17617/2.883564
  • Enfield, N. J. (2011). Sources of asymmetry in human interaction: Enchrony, status, knowledge and agency. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 285-312). Cambridge: Cambridge University Press.
  • Ernestus, M., & Baayen, R. H. (2011). Corpora and exemplars in phonology. In J. A. Goldsmith, J. Riggle, & A. C. Yu (Eds.), The handbook of phonological theory (2nd ed.) (pp. 374-400). Oxford: Wiley-Blackwell.
  • Ernestus, M. (2016). L'utilisation des corpus oraux pour la recherche en (psycho)linguistique. In M. Kilani-Schoch, C. Surcouf, & A. Xanthos (Eds.), Nouvelles technologies et standards méthodologiques en linguistique (pp. 65-93). Lausanne: Université de Lausanne.
  • Ernestus, M. (2011). Gradience and categoricality in phonological theory. In M. Van Oostendorp, C. J. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology (pp. 2115-2136). Wiley-Blackwell.
  • Eryilmaz, K., Little, H., & De Boer, B. (2016). Using HMMs To Attribute Structure To Artificial Languages. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/125.html.

    Abstract

    We investigated the use of Hidden Markov Models (HMMs) as a way of representing repertoires of continuous signals in order to infer their building blocks. We tested the idea on a dataset from an artificial language experiment. The study demonstrates that using HMMs for this purpose is viable, but also that there is a lot of room for refinement, such as explicit duration modeling, incorporation of autoregressive elements, and relaxing the Markovian assumption, in order to accommodate specific details.
  • Evans, N., Levinson, S. C., Gaby, A., & Majid, A. (2011). Introduction: Reciprocals and semantic typology. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 1-28). Amsterdam: Benjamins.

    Abstract

    Reciprocity lies at the heart of social cognition, and with it so does the encoding of reciprocity in language via reciprocal constructions. Despite the prominence of strong universal claims about the semantics of reciprocal constructions, there is considerable descriptive literature on the semantics of reciprocals that seems to indicate variable coding and subtle cross-linguistic differences in meaning of reciprocals, both of which would make it impossible to formulate a single, essentialising definition of reciprocal semantics. These problems make it vital for studies in the semantic typology of reciprocals to employ methodologies that allow the relevant categories to emerge objectively from cross-linguistic comparison of standardised stimulus materials. We situate the rationale for the 20-language study that forms the basis for this book within this empirical approach to semantic typology, and summarise some of the findings.
  • Fikkert, P., & Chen, A. (2011). The role of word-stress and intonation in word recognition in Dutch 14- and 24-month-olds. In N. Danis, K. Mesh, & H. Sung (Eds.), Proceedings of the 35th annual Boston University Conference on Language Development (pp. 222-232). Somerville, MA: Cascadilla Press.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2016). Humans Recognize Vocal Expressions Of Emotional States Universally Across Species. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/91.html.

    Abstract

    The perception of danger in the environment can induce physiological responses (such as a heightened state of arousal) in animals, which may cause measurable changes in the prosodic modulation of the voice (Briefer, 2012). The ability to interpret the prosodic features of animal calls as an indicator of emotional arousal may have provided the first hominins with an adaptive advantage, enabling, for instance, the recognition of a threat in the surroundings. This ability might have paved the way for the processing of meaningful prosodic modulations in the emerging linguistic utterances.
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Newen, A., Güntürkün, O., & de Boer, B. (2016). Multimodal Processing Of Emotional Meanings: A Hypothesis On The Adaptive Value Of Prosody. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/90.html.

    Abstract

    Humans combine multiple sources of information to comprehend meanings. These sources can be characterized as linguistic (i.e., lexical units and/or sentences) or paralinguistic (e.g. body posture, facial expression, voice intonation, pragmatic context). Emotion communication is a special case in which linguistic and paralinguistic dimensions can simultaneously denote the same, or multiple incongruous referential meanings. Think, for instance, about when someone says “I’m sad!”, but does so with happy intonation and a happy facial expression. Here, the communicative channels express very specific (although conflicting) emotional states as denotations. In such cases of intermodal incongruence, are we involuntarily biased to respond to information in one channel over the other? We hypothesize that humans are involuntary biased to respond to prosody over verbal content and facial expression, since the ability to communicate socially relevant information such as basic emotional states through prosodic modulation of the voice might have provided early hominins with an adaptive advantage that preceded the emergence of segmental speech (Darwin 1871; Mithen, 2005). To address this hypothesis, we examined the interaction between multiple communicative channels in recruiting attentional resources, within a Stroop interference task (i.e. a task in which different channels give conflicting information; Stroop, 1935). In experiment 1, we used synonyms of “happy” and “sad” spoken with happy and sad prosody. Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody (Word task) or vice versa (Prosody task). Participants responded faster and more accurately in the Prosody task. Within the Word task, incongruent stimuli were responded to more slowly and less accurately than congruent stimuli. In experiment 2, we adopted synonyms of “happy” and “sad” spoken in happy and sad prosody, while a happy or sad face was displayed. 
    Participants were asked to identify the emotion expressed by the verbal content while ignoring prosody and face (Word task), to identify the emotion expressed by prosody while ignoring verbal content and face (Prosody task), or to identify the emotion expressed by the face while ignoring prosody and verbal content (Face task). Participants responded faster in the Face task, and less accurately when the two non-focused channels expressed an emotion incongruent with the focused one, as compared with the condition where all the channels were congruent. In addition, in the Word task, accuracy was lower when prosody was incongruent with verbal content and face, as compared with the condition where all the channels were congruent. Our data suggest that prosody interferes with emotion word processing, eliciting automatic responses even when conflicting with both verbal content and facial expressions at the same time. In contrast, although processed significantly faster than prosody and verbal content, faces alone are not sufficient to interfere in emotion processing within a three-dimensional Stroop task. Our findings align with the hypothesis that the ability to communicate emotions through prosodic modulation of the voice, which seems to be dominant over verbal content, is evolutionarily older than the emergence of segmental articulation (Mithen, 2005; Fitch, 2010). This hypothesis fits with quantitative data suggesting that prosody has a vital role in the perception of well-formed words (Johnson & Jusczyk, 2001), in the ability to map sounds to referential meanings (Filippi et al., 2014), and in syntactic disambiguation (Soderstrom et al., 2003). This research could complement studies on iconic communication within the visual and auditory domains, providing new insights for models of language evolution. Further work aimed at understanding how emotional cues from different modalities are simultaneously integrated will improve our understanding of how humans interpret multimodal emotional meanings in real-life interactions.
  • Fisher, S. E. (2016). A molecular genetic perspective on speech and language. In G. Hickok, & S. Small (Eds.), Neurobiology of Language (pp. 13-24). Amsterdam: Elsevier. doi:10.1016/B978-0-12-407794-2.00002-X.

    Abstract

    The rise of genomic technologies has yielded exciting new routes for studying the biological foundations of language. Researchers have begun to identify genes implicated in neurodevelopmental disorders that disrupt speech and language skills. This chapter illustrates how such work can provide powerful entry points into the critical neural pathways using FOXP2 as an example. Rare mutations of this gene cause problems with learning to sequence mouth movements during speech, accompanied by wide-ranging impairments in language production and comprehension. FOXP2 encodes a regulatory protein, a hub in a network of other genes, several of which have also been associated with language-related impairments. Versions of FOXP2 are found in similar form in many vertebrate species; indeed, studies of animals and birds suggest conserved roles in the development and plasticity of certain sets of neural circuits. Thus, the contributions of this gene to human speech and language involve modifications of evolutionarily ancient functions.
  • Fisher, V. J. (2017). Dance as Embodied Analogy: Designing an Empirical Research Study. In M. Van Delft, J. Voets, Z. Gündüz, H. Koolen, & L. Wijers (Eds.), Danswetenschap in Nederland. Utrecht: Vereniging voor Dansonderzoek (VDO).
  • Fitz, H., Chang, F., & Christansen, M. H. (2011). A connectionist account of the acquisition and processing of relative clauses. In E. Kidd (Ed.), The acquisition of relative clauses. Processing, typology and function (pp. 39-60). Amsterdam: Benjamins.

    Abstract

    Relative clause processing depends on the grammatical role of the head noun in the subordinate clause. This has traditionally been explained in terms of cognitive limitations. We suggest that structure-related processing differences arise from differences in experience with these structures. We present a connectionist model which learns to produce utterances with relative clauses from exposure to message-sentence pairs. The model shows how various factors such as frequent subsequences, structural variations, and meaning conspire to create differences in the processing of these structures. The predictions of this learning-based account have been confirmed in behavioral studies with adults. This work shows that structural regularities that govern relative clause processing can be explained within a usage-based approach to recursion.
  • Fitz, H. (2011). A liquid-state model of variability effects in learning nonadjacent dependencies. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 897-902). Austin, TX: Cognitive Science Society.

    Abstract

    Language acquisition involves learning nonadjacent dependencies that can obtain between words in a sentence. Several artificial grammar learning studies have shown that the ability of adults and children to detect dependencies between A and B in frames AXB is influenced by the amount of variation in the X element. This paper presents a model of statistical learning which displays similar behavior on this task and generalizes in a human-like way. The model was also used to predict human behavior for increased distance and more variation in dependencies. We compare our model-based approach with the standard invariance account of the variability effect.
  • Fitz, H., & Chang, F. (2008). The role of the input in a connectionist model of the accessibility hierarchy in development. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 120-131). Somerville, Mass.: Cascadilla Press.
  • Floyd, S. (2016). Insubordination in Interaction: The Cha’palaa counter-assertive. In N. Evans, & H. Wananabe (Eds.), Dynamics of Insubordination (pp. 341-366). Amsterdam: John Benjamins.

    Abstract

    In the Cha’palaa language of Ecuador, the main-clause use of the otherwise non-finite morpheme -ba can be accounted for by a specific interactive practice: the ‘counter-assertion’ of a statement or implicature of a previous conversational turn. Attention to the ways in which different constructions are deployed in such recurrent conversational contexts reveals a plausible account of how this type of dependent clause has come to be one of the options for finite clauses. After giving some background on Cha’palaa and placing -ba clauses within a larger ecology of insubordination constructions in the language, this chapter uses examples from a video corpus of informal conversation to illustrate how interactive data provide answers that may otherwise be elusive for understanding how the different grammatical options for Cha’palaa finite verb constructions have been structured by insubordination.
  • Floyd, S., & Bruil, M. (2011). Interactional functions as part of the grammar: The suffix –ba in Cha’palaa. In P. K. Austin, O. Bond, D. Nathan, & L. Marten (Eds.), Proceedings of the 3rd Conference on Language Description and Theory (pp. 91-100). London: SOAS.
  • Floyd, S. (2017). Requesting as a means for negotiating distributed agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 67-78). Oxford: Oxford University Press.
  • Floyd, S., & Norcliffe, E. (2016). Switch reference systems in the Barbacoan languages and their neighbors. In R. Van Gijn, & J. Hammond (Eds.), Switch Reference 2.0 (pp. 207-230). Amsterdam: Benjamins.

    Abstract

    This chapter surveys the available data on Barbacoan languages and their neighbors to explore a case study of switch reference within a single language family and in a situation of areal contact. To the extent possible given the available data, we weigh accounts appealing to common inheritance and areal convergence to ask what combination of factors led to the current state of these languages. We discuss the areal distribution of switch reference systems in the northwest Andean region, the different types of systems and degrees of complexity observed, and scenarios of contact and convergence, particularly in the case of Barbacoan and Ecuadorian Quechua. We then cover each of the Barbacoan languages’ systems (with the exception of Totoró, represented by its close relative Guambiano), identifying limited formal cognates, primarily between closely related Tsafiki and Cha’palaa, as well as broader functional similarities, particularly in terms of interactions with topic/focus markers. We account for the current state of affairs with a complex scenario of areal prevalence of switch reference combined with deep structural family inheritance and formal re-structuring of the systems over time.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues. Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2016). Using Statistics to Learn Words and Grammatical Categories: How High Frequency Words Assist Language Acquisition. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 81-86). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2016/papers/0027/index.html.

    Abstract

    Recent studies suggest that high-frequency words may benefit speech segmentation (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005) and grammatical categorisation (Monaghan, Christiansen, & Chater, 2007). To date, these tasks have been examined separately, but not together. We familiarised adults with continuous speech comprising repetitions of target words, and compared learning to a language in which targets appeared alongside high-frequency marker words. Marker words reliably preceded targets, and distinguished them into two otherwise unidentifiable categories. Participants completed a 2AFC segmentation test, and a similarity judgement categorisation test. We tested transfer to a word-picture mapping task, where words from each category were used either consistently or inconsistently to label actions/objects. Participants segmented the speech successfully, but only demonstrated effective categorisation when speech contained high-frequency marker words. The advantage of marker words extended to the early stages of the transfer task. Findings indicate the same high-frequency words may assist speech segmentation and grammatical categorisation.
  • De La Fuente, J., Casasanto, D., Román, A., & Santiago, J. (2011). Searching for cultural influences on the body-specific association of preferred hand and emotional valence. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 2616-2620). Austin, TX: Cognitive Science Society.
  • Fusaroli, R., Tylén, K., Garly, K., Steensig, J., Christiansen, M. H., & Dingemanse, M. (2017). Measures and mechanisms of common ground: Backchannels, conversational repair, and interactive alignment in free and task-oriented social interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2055-2060). Austin, TX: Cognitive Science Society.

    Abstract

    A crucial aspect of everyday conversational interactions is our ability to establish and maintain common ground. Understanding the relevant mechanisms involved in such social coordination remains an important challenge for cognitive science. While common ground is often discussed in very general terms, different contexts of interaction are likely to afford different coordination mechanisms. In this paper, we investigate the presence and relation of three mechanisms of social coordination – backchannels, interactive alignment and conversational repair – across free and task-oriented conversations. We find significant differences: task-oriented conversations involve a higher presence of repair – restricted offers in particular – and backchannels, as well as a reduced level of lexical and syntactic alignment. We find that restricted repair is associated with lexical alignment and open repair with backchannels. Our findings highlight the need to explicitly assess several mechanisms at once and to investigate diverse activities to understand their role and relations.
  • Galke, L., Mai, F., Schelten, A., Brunsch, D., & Scherp, A. (2017). Using titles vs. full-text as source for automated semantic document annotation. In O. Corcho, K. Janowicz, G. Rizzo, I. Tiddi, & D. Garijo (Eds.), Proceedings of the 9th International Conference on Knowledge Capture (K-CAP 2017). New York: ACM.

    Abstract

    We conduct the first systematic comparison of automated semantic annotation based on either the full-text or only on the title metadata of documents. Apart from the prominent text classification baselines kNN and SVM, we also compare recent techniques of Learning to Rank and neural networks and revisit the traditional methods logistic regression, Rocchio, and Naive Bayes. Across three of our four datasets, the performance of the classifications using only titles reaches over 90% of the quality compared to the performance when using the full-text.
  • Galke, L., Saleh, A., & Scherp, A. (2017). Word embeddings for practical information retrieval. In M. Eibl, & M. Gaedke (Eds.), INFORMATIK 2017 (pp. 2155-2167). Bonn: Gesellschaft für Informatik. doi:10.18420/in2017_215.

    Abstract

    We assess the suitability of word embeddings for practical information retrieval scenarios. Thus, we assume that users issue ad-hoc short queries where we return the first twenty retrieved documents after applying a boolean matching operation between the query and the documents. We compare the performance of several techniques that leverage word embeddings in the retrieval models to compute the similarity between the query and the documents, namely word centroid similarity, paragraph vectors, Word Mover’s distance, as well as our novel inverse document frequency (IDF) re-weighted word centroid similarity. We evaluate the performance using the ranking metrics mean average precision, mean reciprocal rank, and normalized discounted cumulative gain. Additionally, we inspect the retrieval models’ sensitivity to document length by using either only the title or the full-text of the documents for the retrieval task. We conclude that word centroid similarity is the best competitor to state-of-the-art retrieval models. It can be further improved by re-weighting the word frequencies with IDF before aggregating the respective word vectors of the embedding. The proposed cosine similarity of IDF re-weighted word vectors is competitive to the TF-IDF baseline and even outperforms it in the news domain by a relative margin of 15%.
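    The IDF re-weighted word centroid similarity described in this abstract can be illustrated with a short sketch: each document (and the query) is represented as the IDF-weighted average of its word vectors, and documents are ranked by cosine similarity to the query centroid. The toy 2-d embeddings and the smoothed IDF formula below are illustrative assumptions, not the paper's exact setup; a real system would load pretrained word2vec or GloVe vectors.

```python
import math
import numpy as np

def idf_weights(docs):
    """Smoothed inverse document frequency for each term in the collection."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return {t: math.log(n / c) + 1.0 for t, c in df.items()}

def idf_centroid(tokens, embeddings, idf):
    """IDF-weighted average of the word vectors (the text's 'centroid')."""
    vecs = [idf.get(t, 1.0) * embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def centroid_similarity(query, doc, embeddings, idf):
    """Cosine similarity between query and document centroids."""
    q = idf_centroid(query, embeddings, idf)
    d = idf_centroid(doc, embeddings, idf)
    if q is None or d is None:
        return 0.0
    return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))

# Toy embeddings (assumed for illustration only).
emb = {"cat": np.array([1.0, 0.1]), "dog": np.array([0.9, 0.2]),
       "stock": np.array([0.1, 1.0]), "market": np.array([0.2, 0.9])}
docs = [["cat", "dog"], ["stock", "market"], ["dog", "market"]]
idf = idf_weights(docs)
print(centroid_similarity(["cat"], docs[0], emb, idf))  # topically close: high
print(centroid_similarity(["cat"], docs[1], emb, idf))  # topically distant: low
```

    Ranking the collection then amounts to sorting documents by this score, which is what makes the method cheap enough for the ad-hoc short-query scenario the paper targets.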
  • Gannon, E., He, J., Gao, X., & Chaparro, B. (2016). RSVP Reading on a Smart Watch. In Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting (pp. 1130-1134).

    Abstract

    Reading with Rapid Serial Visual Presentation (RSVP) has shown promise for optimizing screen space and increasing reading speed without compromising comprehension. Given the wide use of small-screen devices, the present study compared RSVP and traditional reading on three types of reading comprehension, reading speed, and subjective measures on a smart watch. Results confirm previous studies that show faster reading speed with RSVP without detracting from comprehension. Subjective data indicate that traditional reading is strongly preferred to RSVP as a primary reading method. Given the optimal use of screen space, increased speed and comparable comprehension, future studies should focus on making RSVP a more comfortable format.
  • García Lecumberri, M. L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B. T., Scharenborg, O., Van Dommelen, W., & Volin, J. (2008). The non-native consonant challenge for European languages. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1781-1784). ISCA Archive.

    Abstract

    This paper reports on a multilingual investigation into the effects of different masker types on native and non-native perception in a VCV consonant recognition task. Native listeners outperformed 7 other language groups, but all groups showed a similar ranking of maskers. Strong first language (L1) interference was observed, both from the sound system and from the L1 orthography. Universal acoustic-perceptual tendencies are also at work in both native and non-native sound identifications in noise. The effect of linguistic distance, however, was less clear: in large multilingual studies, listener variables may overpower other factors.
  • Gerwien, J., & Flecken, M. (2016). First things first? Top-down influences on event apprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2633-2638). Austin, TX: Cognitive Science Society.

    Abstract

    Not much is known about event apprehension, the earliest stage of information processing in elicited language production studies using pictorial stimuli. A reason for our lack of knowledge on this process is that apprehension happens very rapidly (<350 ms after stimulus onset, Griffin & Bock 2000), making it difficult to measure the process directly. To broaden our understanding of apprehension, we analyzed landing positions and onset latencies of first fixations on visual stimuli (pictures of real-world events) given short stimulus presentation times, presupposing that the first fixation directly results from information processing during apprehension.
  • Gillespie, K., & San Roque, L. (2011). Music and language in Duna pikono. In A. Rumsey, & D. Niles (Eds.), Sung tales from the Papua New Guinea Highlands: Studies in form, meaning and sociocultural context (pp. 49-63). Canberra: ANU E Press.
  • Gordon, P. C., Lowder, M. W., & Hoedemaker, R. S. (2016). Reading in normally aging adults. In H. Wright (Ed.), Cognitive-Linguistic Processes and Aging (pp. 165-192). Amsterdam: Benjamins. doi:10.1075/z.200.07gor.

    Abstract

    The activity of reading raises fundamental theoretical and practical questions about healthy cognitive aging. Reading relies greatly on knowledge of patterns of language and of meaning at the level of words and topics of text. Further, this knowledge must be rapidly accessed so that it can be coordinated with processes of perception, attention, memory and motor control that sustain skilled reading at rates of four-to-five words a second. As such, reading depends both on crystallized semantic intelligence which grows or is maintained through healthy aging, and on components of fluid intelligence which decline with age. Reading is important to older adults because it facilitates completion of everyday tasks that are essential to independent living. In addition, it entails the kind of active mental engagement that can preserve and deepen the cognitive reserve that may mitigate the negative consequences of age-related changes in the brain. This chapter reviews research on the front end of reading (word recognition) and on the back end of reading (text memory) because both of these abilities are surprisingly robust to declines associated with cognitive aging. For word recognition, that robustness is surprising because rapid processing of the sort found in reading is usually impaired by aging; for text memory, it is surprising because other types of episodic memory performance (e.g., paired associates) substantially decline in aging. These two otherwise quite different levels of reading comprehension remain robust because they draw on the knowledge of language that older adults gain through a life-time of experience with language.
  • Goudbeek, M., Smits, R., Cutler, A., & Swingley, D. (2017). Auditory and phonetic category formation. In H. Cohen, & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (2nd revised ed.) (pp. 687-708). Amsterdam: Elsevier.
  • Le Guen, O., Senft, G., & Sicoli, M. A. (2008). Language of perception: Views from anthropology. In A. Majid (Ed.), Field Manual Volume 11 (pp. 29-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446079.

    Abstract

    To understand the underlying principles of categorisation and classification of sensory input, semantic analyses must be based on both language and culture. The senses are not only physiological phenomena, but they are also linguistic, cultural, and social. The goal of this task is to explore and describe sociocultural patterns relating language of perception, ideologies of perception, and perceptual practice in our speech communities.
  • Gullberg, M. (2008). A helping hand? Gestures, L2 learners, and grammar. In S. G. McCafferty, & G. Stam (Eds.), Gesture: Second language acquisition and classroom research (pp. 185-210). New York: Routledge.

    Abstract

    This chapter explores what L2 learners' gestures reveal about L2 grammar. The focus is on learners’ difficulties with maintaining reference in discourse caused by their incomplete mastery of pronouns. The study highlights the systematic parallels between properties of L2 speech and gesture, and the parallel effects of grammatical development in both modalities. The validity of a communicative account of interlanguage grammar in this domain is tested by taking the cohesive properties of the gesture-speech ensemble into account. Specifically, I investigate whether learners use gestures to compensate for and to license over-explicit reference in speech. The results rule out a communicative account for the spoken variety of maintained reference. In contrast, cohesive gestures are found to be multi-functional. While the presence of cohesive gestures is not communicatively motivated, their spatial realisation is. It is suggested that gestures are exploited as a grammatical communication strategy to disambiguate speech wherever possible, but that they may also be doing speaker-internal work. The methodological importance of considering L2 gestures when studying grammar is also discussed.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 207-216). Oxford: Blackwell.
  • Gullberg, M. (2008). Gestures and second language acquisition. In P. Robinson, & N. C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276-305). New York: Routledge.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language at multiple levels, and reflect cognitive and linguistic activities in non-trivial ways. This chapter presents an overview of what gestures can tell us about the processes of second language acquisition. It focuses on two key aspects, (a) gestures and the developing language system and (b) gestures and learning, and discusses some implications of an expanded view of language acquisition that takes gestures into account.
