Publications

  • Harmon, Z., & Kapatsinski, V. (2016). Fuse to be used: A weak cue’s guide to attracting attention. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 520-525). Austin, TX: Cognitive Science Society.

    Abstract

    Several studies examined cue competition in human learning by testing learners on a combination of conflicting cues rooting for different outcomes, with each cue perfectly predicting its outcome. A common result has been that learners faced with cue conflict choose the outcome associated with the rare cue (the Inverse Base Rate Effect, IBRE). Here, we investigate cue competition including IBRE with sentences containing cues to meanings in a visual world. We do not observe IBRE. Instead we find that position in the sentence strongly influences cue salience. Faced with conflict between an initial cue and a non-initial cue, learners choose the outcome associated with the initial cue, whether frequent or rare. However, a frequent configuration of non-initial cues that are not sufficiently salient on their own can overcome a competing salient initial cue rooting for a different meaning. This provides a possible explanation for certain recurring patterns in language change.
  • Harmon, Z., & Kapatsinski, V. (2016). Determinants of lengths of repetition disfluencies: Probabilistic syntactic constituency in speech production. In R. Burkholder, C. Cisneros, E. R. Coppess, J. Grove, E. A. Hanink, H. McMahan, C. Meyer, N. Pavlou, Ö. Sarıgül, A. R. Singerman, & A. Zhang (Eds.), Proceedings of the Fiftieth Annual Meeting of the Chicago Linguistic Society (pp. 237-248). Chicago: Chicago Linguistic Society.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (Eds.). (2011). Visual search and visual world: Interactions among visual attention, language, and working memory [Special Issue]. Acta Psychologica, 137(2). doi:10.1016/j.actpsy.2011.01.005.
  • Hendrickx, I., Lefever, E., Croijmans, I., Majid, A., & Van den Bosch, A. (2016). Very quaffable and great fun: Applying NLP to wine reviews. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics: Vol. 2 (pp. 306-312). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We automatically predict properties of wines on the basis of smell and flavor descriptions from experts’ wine reviews. We show wine experts are capable of describing their smell and flavor experiences in wine reviews in a sufficiently consistent manner, such that we can use their descriptions to predict properties of a wine based solely on language. The experimental results show promising F-scores when using lexical and semantic information to predict the color, grape variety, country of origin, and price of a wine. This demonstrates, contrary to popular opinion, that wine experts’ reviews really are informative.
  • Hintz, F., & Scharenborg, O. (2016). Neighbourhood density influences word recognition in native and non-native speech recognition in noise. In H. Van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments (SPIRE) workshop (pp. 46-47). Groningen.
  • Hintz, F., & Scharenborg, O. (2016). The effect of background noise on the activation of phonological and semantic information during spoken-word recognition. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 2816-2820).

    Abstract

    During spoken-word recognition, listeners experience phonological competition between multiple word candidates, which increases, relative to optimal listening conditions, when speech is masked by noise. Moreover, listeners activate semantic word knowledge during the word’s unfolding. Here, we replicated the effect of background noise on phonological competition and investigated to what extent noise affects the activation of semantic information in phonological competitors. Participants’ eye movements were recorded when they listened to sentences containing a target word and looked at three types of displays. The displays either contained a picture of the target word, or a picture of a phonological onset competitor, or a picture of a word semantically related to the onset competitor, each along with three unrelated distractors. The analyses revealed that, in noise, fixations to the target and to the phonological onset competitor were delayed and smaller in magnitude compared to the clean listening condition, most likely reflecting enhanced phonological competition. No evidence for the activation of semantic information in the phonological competitors was observed in noise, nor, surprisingly, in the clear. We discuss the implications of the lack of an effect and differences between the present and earlier studies.
  • Holler, J., Tutton, M., & Wilkin, K. (2011). Co-speech gestures in the process of meaning coordination. In Proceedings of the 2nd GESPIN - Gesture & Speech in Interaction Conference, Bielefeld, 5-7 Sep 2011.

    Abstract

    This study uses a classical referential communication task to investigate the role of co-speech gestures in the process of coordination. The study manipulates both the common ground between the interlocutors and the visibility of the gestures they use. The findings show that co-speech gestures are an integral part of the referential utterances speakers produced for both initial and repeated references, and that the availability of gestures appears to impact interlocutors’ referential coordination. The results are discussed with regard to past research on common ground as well as theories of gesture production.
  • Irvine, E., & Roberts, S. G. (2016). Deictic tools can limit the emergence of referential symbol systems. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/99.html.

    Abstract

    Previous experiments and models show that the pressure to communicate can lead to the emergence of symbols in specific tasks. The experiment presented here suggests that the ability to use deictic gestures can reduce the pressure for symbols to emerge in co-operative tasks. In the 'gesture-only' condition, pairs built a structure together in 'Minecraft', and could only communicate using a small range of gestures. In the 'gesture-plus' condition, pairs could also use sound to develop a symbol system if they wished. All pairs were taught a pointing convention. None of the pairs we tested developed a symbol system, and performance was no different across the two conditions. We therefore suggest that deictic gestures, and non-referential means of organising activity sequences, are often sufficient for communication. This suggests that the emergence of linguistic symbols in early hominids may have been late and patchy, with symbols emerging only in contexts where they could significantly improve task success or efficiency. Given the communicative power of pointing, however, these contexts may be fewer than usually supposed. An approach for identifying these situations is outlined.
  • Janssen, R., Winter, B., Dediu, D., Moisik, S. R., & Roberts, S. G. (2016). Nonlinear biases in articulation constrain the design space of language. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/86.html.

    Abstract

    In Iterated Learning (IL) experiments, a participant’s learned output serves as the next participant’s learning input (Kirby et al., 2014). IL can be used to model cultural transmission and has indicated that weak biases can be amplified through repeated cultural transmission (Kirby et al., 2007). So, for example, structural language properties can emerge over time because languages come to reflect the cognitive constraints in the individuals that learn and produce the language. Similarly, we propose that languages may also reflect certain anatomical biases. Do sound systems adapt to the affordances of the articulation space induced by the vocal tract?
    The human vocal tract has inherent nonlinearities which might derive from acoustics and aerodynamics (cf. quantal theory, see Stevens, 1989) or biomechanics (cf. Gick & Moisik, 2015). For instance, moving the tongue anteriorly along the hard palate to produce a fricative does not result in large changes in acoustics in most cases, but for a small range there is an abrupt change from a perceived palato-alveolar [ʃ] to alveolar [s] sound (Perkell, 2012). Nonlinearities such as these might bias all human speakers to converge on a very limited set of phonetic categories, and might even be a basis for combinatoriality or phonemic ‘universals’.
    While IL typically uses discrete symbols, Verhoef et al. (2014) have used slide whistles to produce a continuous signal. We conducted an IL experiment with human subjects who communicated using a digital slide whistle for which the degree of nonlinearity is controlled. A single parameter (α) changes the mapping from slide whistle position (the ‘articulator’) to the acoustics. With α=0, the position of the slide whistle maps Bark-linearly to the acoustics. As α approaches 1, the mapping gets more double-sigmoidal, creating three plateaus where large ranges of positions map to similar frequencies. In more abstract terms, α represents the strength of a nonlinear (anatomical) bias in the vocal tract.
    Six chains (138 participants) of dyads were tested, each chain with a different, fixed α. Participants had to communicate four meanings by producing a continuous signal using the slide-whistle in a ‘director-matcher’ game, alternating roles (cf. Garrod et al., 2007).
    Results show that for high αs, subjects quickly converged on the plateaus. This quick convergence is indicative of a strong bias, repelling subjects away from unstable regions already within-subject. Furthermore, high αs lead to the emergence of signals that oscillate between two (out of three) plateaus. Because the sigmoidal spaces are spatially constrained, participants increasingly used the sequential/temporal dimension. As a result of this, the average duration of signals with high α was ~100ms longer than with low α. These oscillations could be an expression of a basis for phonemic combinatoriality.
    We have shown that it is possible to manipulate the magnitude of an articulator-induced non-linear bias in a slide whistle IL framework. The results suggest that anatomical biases might indeed constrain the design space of language. In particular, the signaling systems in our study quickly converged (within-subject) on the use of stable regions. While these conclusions were drawn from experiments using slide whistles with a relatively strong bias, weaker biases could possibly be amplified over time by repeated cultural transmission, and likely lead to similar outcomes.
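    The α-controlled position-to-acoustics mapping described in this abstract can be sketched minimally. The paper does not give the exact functional form, so the blend below of a linear term with two logistic steps (and the crossover points and steepness k) is an illustrative assumption, not the authors' implementation:

```python
import math

def whistle_map(x, alpha, k=30.0):
    """Map slide position x in [0, 1] to a normalized 'Bark-scaled' frequency.

    alpha = 0 gives a linear map; as alpha approaches 1, the map becomes
    double-sigmoidal, creating three plateaus where large ranges of
    positions yield similar frequencies.
    """
    linear = x
    # Two logistic steps centered at 1/3 and 2/3 create three plateaus.
    step = lambda c: 1.0 / (1.0 + math.exp(-k * (x - c)))
    double_sigmoid = (step(1 / 3) + step(2 / 3)) / 2.0
    return (1 - alpha) * linear + alpha * double_sigmoid
```

    With alpha near 1, positions around 0.45 and 0.55 both land on the middle plateau (outputs near 0.5), which is the kind of stable region the abstract reports participants converging on.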
  • Janssen, R., Dediu, D., & Moisik, S. R. (2016). Simple agents are able to replicate speech sounds using 3d vocal tract model. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/97.html.

    Abstract

    Many factors have been proposed to explain why groups of people use different speech sounds in their language. These range from cultural, cognitive, and environmental (e.g., Everett et al., 2015) to anatomical (e.g., vocal tract (VT) morphology). How could such anatomical properties have led to the similarities and differences in speech sound distributions between human languages?

    It is known that hard palate profile variation can induce different articulatory strategies in speakers (e.g., Brunner et al., 2009). That is, different hard palate profiles might induce a kind of bias on speech sound production, easing some types of sounds while impeding others. With a population of speakers (with a proportion of individuals) that share certain anatomical properties, even subtle VT biases might become expressed at a population-level (through e.g., bias amplification, Kirby et al., 2007). However, before we look into population-level effects, we should first look at within-individual anatomical factors. For that, we have developed a computer-simulated analogue for a human speaker: an agent. Our agent is designed to replicate speech sounds using a production and cognition module in a computationally tractable manner.

    Previous agent models have often used more abstract (e.g., symbolic) signals (e.g., Kirby et al., 2007). We have equipped our agent with a three-dimensional model of the VT (the production module, based on Birkholz, 2005) to which we made numerous adjustments. Specifically, we used a 4th-order Bezier curve that is able to capture hard palate variation on the mid-sagittal plane (XXX, 2015). Using an evolutionary algorithm, we were able to fit the model to human hard palate MRI tracings, yielding high-accuracy fits using as few as two parameters. Finally, we show that the samples map well-dispersed to the parameter-space, demonstrating that the model cannot generate unrealistic profiles. We can thus use this procedure to import palate measurements into our agent’s production module to investigate the effects on acoustics. We can also exaggerate/introduce novel biases.

    Our agent is able to control the VT model using the cognition module.

    Previous research has focused on detailed neurocomputation (e.g., Kröger et al., 2014) that highlights e.g., neurobiological principles or speech recognition performance. However, the brain is not the focus of our current study. Furthermore, present-day computing throughput likely does not allow for large-scale deployment of these architectures, as required by the population model we are developing. Thus, the question whether a very simple cognition module is able to replicate sounds in a computationally tractable manner, and even generalize over novel stimuli, is one worthy of attention in its own right.

    Our agent’s cognition module is based on running an evolutionary algorithm on a large population of feed-forward neural networks (NNs). As such, (anatomical) bias strength can be thought of as an attractor basin area within the parameter-space the agent has to explore. The NN we used consists of a triple-layered (fully-connected), directed graph. The input layer (three neurons) receives the formant frequencies of a target-sound. The output layer (12 neurons) projects to the articulators in the production module. A hidden layer (seven neurons) enables the network to deal with nonlinear dependencies. The Euclidean distance (first three formants) between target and replication is used as fitness measure. Results show that sound replication is indeed possible, with Euclidean distance quickly approaching a close-to-zero asymptote.

    Statistical analysis should reveal if the agent can also: a) Generalize: Can it replicate sounds it was not exposed to during learning? b) Replicate consistently: Do different, isolated agents always converge on the same sounds? c) Deal with consolidation: Can it still learn new sounds after an extended learning phase (‘infancy’) has been terminated? Finally, a comparison with more complex models will be used to demonstrate robustness.
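    The cognition module described in this abstract (a 3-7-12 feed-forward network optimized by an evolutionary algorithm against a Euclidean-distance fitness on the first three formants) can be sketched as below. The tanh activations, mutation scheme, and the toy stand-in for the 3D vocal-tract production module are illustrative assumptions, not the authors' code:

```python
import math
import random

SIZES = (3, 7, 12)  # input formants, hidden units, articulator outputs

def init_genome(rng):
    # One flat weight vector: 3*7 weights + 7 biases + 7*12 weights + 12 biases.
    n = SIZES[0]*SIZES[1] + SIZES[1] + SIZES[1]*SIZES[2] + SIZES[2]
    return [rng.gauss(0.0, 0.5) for _ in range(n)]

def forward(genome, formants):
    # Fully connected 3-7-12 feed-forward pass with tanh activations.
    i = 0
    def take(k):
        nonlocal i
        chunk = genome[i:i + k]; i += k
        return chunk
    w1 = [take(SIZES[0]) for _ in range(SIZES[1])]
    b1 = take(SIZES[1])
    hidden = [math.tanh(sum(w*x for w, x in zip(row, formants)) + b)
              for row, b in zip(w1, b1)]
    w2 = [take(SIZES[1]) for _ in range(SIZES[2])]
    b2 = take(SIZES[2])
    return [math.tanh(sum(w*h for w, h in zip(row, hidden)) + b)
            for row, b in zip(w2, b2)]

def produce(articulators):
    # Toy stand-in for the 3D vocal-tract production module: maps the 12
    # articulator settings to three "formant" values (pure invention).
    return [sum(articulators[k::3]) for k in range(3)]

def fitness(genome, target):
    # Euclidean distance between target and replicated first three formants.
    return math.dist(target, produce(forward(genome, target)))

def evolve(target, pop_size=60, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [init_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target))
        elite = pop[:pop_size // 4]
        # Refill the population with Gaussian-mutated copies of elite genomes.
        pop = elite + [[w + rng.gauss(0.0, 0.1) for w in rng.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda g: fitness(g, target))

best = evolve(target=[0.5, -0.2, 0.8])
```

    Under these assumptions the distance to the target formants shrinks rapidly across generations, mirroring the close-to-zero asymptote the abstract reports.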
  • Janzen, G., & Weststeijn, C. (2004). Neural representation of object location and route direction: An fMRI study. NeuroImage, 22(Supplement 1), e634-e635.
  • Janzen, G., & Van Turennout, M. (2004). Neuronale Markierung navigationsrelevanter Objekte im räumlichen Gedächtnis: Ein fMRT Experiment. In D. Kerzel (Ed.), Beiträge zur 46. Tagung experimentell arbeitender Psychologen (p. 125). Lengerich: Pabst Science Publishers.
  • Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (p. 159). York: University of York.
  • Jasmin, K., & Casasanto, D. (2011). The QWERTY effect: How stereo-typing shapes the mental lexicon. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
  • Jeske, J., Kember, H., & Cutler, A. (2016). Native and non-native English speakers' use of prosody to predict sentence endings. In Proceedings of the 16th Australasian International Conference on Speech Science and Technology (SST2016).
  • Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.

    Abstract

    Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test listeners applied this knowledge to old and novel items even in the absence of informative ToV.
  • Jesse, A., & Mitterer, H. (2011). Pointing gestures do not influence the perception of lexical stress. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2445-2448).

    Abstract

    We investigated whether seeing a pointing gesture influences the perceived lexical stress. A pitch contour continuum between the Dutch words “CAnon” (‘canon’) and “kaNON” (‘cannon’) was presented along with a pointing gesture during the first or the second syllable. Pointing gestures following natural recordings but not Gaussian functions influenced stress perception (Experiment 1 and 2), especially when auditory context preceded (Experiment 2). This was not replicated in Experiment 3. Natural pointing gestures failed to affect the categorization of a pitch peak timing continuum (Experiment 4). There is thus no convincing evidence that seeing a pointing gesture influences lexical stress perception.
  • Johns, T. G., Perera, R. M., Vitali, A. A., Vernes, S. C., & Scott, A. (2004). Phosphorylation of a glioma-specific mutation of the EGFR [Abstract]. Neuro-Oncology, 6, 317.

    Abstract

    Mutations of the epidermal growth factor receptor (EGFR) gene are found at a relatively high frequency in glioma, with the most common being the de2-7 EGFR (or EGFRvIII). This mutation arises from an in-frame deletion of exons 2-7, which removes 267 amino acids from the extracellular domain of the receptor. Despite being unable to bind ligand, the de2-7 EGFR is constitutively active at a low level. Transfection of human glioma cells with the de2-7 EGFR has little effect in vitro, but when grown as tumor xenografts this mutated receptor imparts a dramatic growth advantage. We mapped the phosphorylation pattern of de2-7 EGFR, both in vivo and in vitro, using a panel of antibodies specific for different phosphorylated tyrosine residues. Phosphorylation of de2-7 EGFR was detected constitutively at all tyrosine sites surveyed in vitro and in vivo, including tyrosine 845, a known target in the wild-type EGFR for src kinase. There was a substantial upregulation of phosphorylation at every tyrosine residue of the de2-7 EGFR when cells were grown in vivo compared to the receptor isolated from cells cultured in vitro. Upregulation of phosphorylation at tyrosine 845 could be stimulated in vitro by the addition of specific components of the ECM via an integrin-dependent mechanism. These observations may partially explain why the growth enhancement mediated by de2-7 EGFR is largely restricted to the in vivo environment.
  • Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division.
  • Kember, H., Choi, J., & Cutler, A. (2016). Processing advantages for focused words in Korean. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 702-705).

    Abstract

    In Korean, focus is expressed in accentual phrasing. To ascertain whether words focused in this manner enjoy a processing advantage analogous to that conferred by focus as expressed in, e.g., English and Dutch, we devised sentences with target words in one of four conditions: prosodic focus, syntactic focus, prosodic + syntactic focus, and no focus as a control. 32 native speakers of Korean listened to blocks of 10 sentences, then were presented visually with words and asked whether or not they had heard them. Overall, words with focus were recognised significantly faster and more accurately than unfocused words. In addition, words with syntactic focus or syntactic + prosodic focus were recognised faster than words with prosodic focus alone. As for other languages, Korean focus confers processing advantage on the words carrying it. While prosodic focus does provide an advantage, however, syntactic focus appears to provide the greater beneficial effect for recognition memory.
  • Kempen, G. (1997). De ontdubbelde taalgebruiker: Maken taalproductie en taalperceptie gebruik van één en dezelfde syntactische processor? [Abstract]. In 6e Winter Congres NvP. Programma and abstracts (pp. 31-32). Nederlandse Vereniging voor Psychonomie.
  • Kempen, G., Kooij, A., & Van Leeuwen, T. (1997). Do skilled readers exploit inflectional spelling cues that do not mirror pronunciation? An eye movement study of morpho-syntactic parsing in Dutch. In Abstracts of the Orthography Workshop "What spelling changes". Nijmegen: Max Planck Institute for Psycholinguistics.
  • Kempen, G., & Harbusch, K. (1998). A 'tree adjoining' grammar without adjoining: The case of scrambling in German. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
  • Kempen, G., & Harbusch, K. (2004). How flexible is constituent order in the midfield of German subordinate clauses? A corpus study revealing unexpected rigidity. In S. Kepser, & M. Reis (Eds.), Pre-Proceedings of the International Conference on Linguistic Evidence (pp. 81-85). Tübingen: Niemeyer.
  • Kempen, G. (2004). Interactive visualization of syntactic structure assembly for grammar-intensive first- and second-language instruction. In R. Delmonte, P. Delcloque, & S. Tonelli (Eds.), Proceedings of InSTIL/ICALL2004 Symposium on NLP and speech technologies in advanced language learning systems (pp. 183-186). Venice: University of Venice.
  • Kempen, G., & Harbusch, K. (2004). How flexible is constituent order in the midfield of German subordinate clauses?: A corpus study revealing unexpected rigidity. In Proceedings of the International Conference on Linguistic Evidence (pp. 81-85). Tübingen: University of Tübingen.
  • Kempen, G. (2004). Human grammatical coding: Shared structure formation resources for grammatical encoding and decoding. In Cuny 2004 - The 17th Annual CUNY Conference on Human Sentence Processing. March 25-27, 2004. University of Maryland (p. 66).
  • Kemps-Snijders, M., Koller, T., Sloetjes, H., & Verweij, H. (2010). LAT bridge: Bridging tools for annotation and exploration of rich linguistic data. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 2648-2651). European Language Resources Association (ELRA).

    Abstract

    We present a software module, the LAT Bridge, which enables bidirectional communication between the annotation and exploration tools developed at the Max Planck Institute for Psycholinguistics as part of our Language Archiving Technology (LAT) tool suite. These existing annotation and exploration tools enable the annotation, enrichment, exploration and archive management of linguistic resources. The user community has expressed the desire to use different combinations of LAT tools in conjunction with each other. The LAT Bridge is designed to cater for a number of basic data interaction scenarios between the LAT annotation and exploration tools. These interaction scenarios (e.g. bootstrapping a wordlist, searching for annotation examples or lexical entries) have been identified in collaboration with researchers at our institute. We had to take into account that the LAT tools for annotation and exploration represent a heterogeneous application scenario with desktop-installed and web-based tools. Additionally, the LAT Bridge has to work in situations where the Internet is not available or only in an unreliable manner (i.e. with a slow connection or with frequent interruptions). As a result, the LAT Bridge’s architecture supports both online and offline communication between the LAT annotation and exploration tools.
  • Khetarpal, N., Majid, A., Malt, B. C., Sloman, S., & Regier, T. (2010). Similarity judgments reflect both language and cross-language tendencies: Evidence from two semantic domains. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 358-363). Austin, TX: Cognitive Science Society.

    Abstract

    Many theories hold that semantic variation in the world’s languages can be explained in terms of a universal conceptual space that is partitioned differently by different languages. Recent work has supported this view in the semantic domain of containers (Malt et al., 1999), and assumed it in the domain of spatial relations (Khetarpal et al., 2009), based in both cases on similarity judgments derived from pile-sorting of stimuli. Here, we reanalyze data from these two studies and find a more complex picture than these earlier studies suggested. In both cases we find that sorting is similar across speakers of different languages (in line with the earlier studies), but nonetheless reflects the sorter’s native language (in contrast with the earlier studies). We conclude that there are cross-culturally shared conceptual tendencies that can be revealed by pile-sorting, but that these tendencies may be modulated to some extent by language. We discuss the implications of these findings for accounts of semantic variation.
  • Kita, S., Ozyurek, A., Allen, S., & Ishizuka, T. (2010). Early links between iconic gestures and sound symbolic words: Evidence for multimodal protolanguage. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International conference on the Evolution of Language (EVOLANG 8) (pp. 429-430). Singapore: World Scientific.
  • Kita, S., van Gijn, I., & van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In Gesture and Sign-Language in Human-Computer Interaction (Lecture Notes in Artificial Intelligence - LNCS Subseries, Vol. 1371) (pp. 23-35). Berlin, Germany: Springer-Verlag.

    Abstract

    The previous literature has suggested that the hand movement in co-speech gestures and signs consists of a series of phases with qualitatively different dynamic characteristics. In this paper, we propose a syntagmatic rule system for movement phases that applies to both co-speech gestures and signs. Descriptive criteria for the rule system were developed for the analysis of video-recorded continuous production of signs and gestures. It involves segmenting a stream of body movement into phases and identifying different phase types. Two human coders used the criteria to analyze signs and co-speech gestures that are produced in natural discourse. It was found that the criteria yielded good inter-coder reliability. These criteria can be used for the technology of automatic recognition of signs and co-speech gestures in order to segment continuous production and identify the potentially meaning-bearing phase.
  • Klein, W. (Ed.). (2004). Philologie auf neuen Wegen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 136.
  • Klein, W. (Ed.). (2004). Universitas [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), 134.
  • Klein, W., & Winkler, S. (Eds.). (2010). Ambiguität [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 40(158).
  • Klein, W. (Ed.). (1989). Kindersprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (73).
  • Klein, W. (Ed.). (1998). Kaleidoskop [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (112).
  • Klein, W. (Ed.). (1997). Technologischer Wandel in den Philologien [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (106).
  • Klein, W., & Meibauer, J. (Eds.). (2011). Spracherwerb und Kinderliteratur [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 162.
  • Klein, W. (Ed.). (1985). Schriftlichkeit [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (59).
  • Koster, M., & Cutler, A. (1997). Segmental and suprasegmental contributions to spoken-word recognition in Dutch. In Proceedings of EUROSPEECH 97 (pp. 2167-2170). Grenoble, France: ESCA.

    Abstract

    Words can be distinguished by segmental differences or by suprasegmental differences or both. Studies from English suggest that suprasegmentals play little role in human spoken-word recognition; English stress, however, is nearly always unambiguously coded in segmental structure (vowel quality); this relationship is less close in Dutch. The present study directly compared the effects of segmental and suprasegmental mispronunciation on word recognition in Dutch. There was a strong effect of suprasegmental mispronunciation, suggesting that Dutch listeners do exploit suprasegmental information in word recognition. Previous findings indicating that the effects of mis-stressing in Dutch differ with stress position were replicated only when segmental change was involved, suggesting that this is an effect of segmental rather than suprasegmental processing.
  • Kung, C., Chwilla, D. J., Gussenhoven, C., Bögels, S., & Schriefers, H. (2010). What did you say just now, bitterness or wife? An ERP study on the interaction between tone, intonation and context in Cantonese Chinese. In Proceedings of Speech Prosody 2010 (pp. 1-4).

    Abstract

    Previous studies on Cantonese Chinese showed that rising question intonation contours on low-toned words lead to frequent misperceptions of the tones. Here we explored the processing consequences of this interaction between tone and intonation by comparing the processing and identification of monosyllabic critical words at the end of questions and statements, using a tone identification task, and ERPs as an online measure of speech comprehension. Experiment 1 yielded higher error rates for the identification of low tones at the end of questions and a larger N400-P600 pattern, reflecting processing difficulty and reanalysis, compared to other conditions. In Experiment 2, we investigated the effect of immediate lexical context on the tone by intonation interaction. Increasing contextual constraints led to a reduction in errors and the disappearance of the P600 effect. These results indicate that there is an immediate interaction between tone, intonation, and context in online speech comprehension. The difference in performance and activation patterns between the two experiments highlights the significance of context in understanding a tone language like Cantonese Chinese.
  • Lai, V. T., Hagoort, P., & Casasanto, D. (2011). Affective and non-affective meaning in words and pictures. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 390-395). Austin, TX: Cognitive Science Society.
  • Lai, J., & Poletiek, F. H. (2010). The impact of starting small on the learnability of recursion. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (CogSci 2010) (pp. 1387-1392). Austin, TX, USA: Cognitive Science Society.
  • Lecumberri, M. L. G., Cooke, M., & Cutler, A. (Eds.). (2010). Non-native speech perception in adverse conditions [Special Issue]. Speech Communication, 52(11/12).
  • Lenkiewicz, P., Wittenburg, P., Schreer, O., Masneri, S., Schneider, D., & Tschöpel, S. (2011). Application of audio and video processing methods for language research. In Proceedings of the conference Supporting Digital Humanities 2011 [SDH 2011], Copenhagen, Denmark, November 17-18, 2011.

    Abstract

    Annotations of media recordings are the grounds for linguistic research. Since creating those annotations is a very laborious task, taking up to 100 times longer than the length of the annotated media, innovative audio and video processing algorithms are needed in order to improve the efficiency and quality of the annotation process. The AVATecH project, started by the Max Planck Institute for Psycholinguistics (MPI) and the Fraunhofer institutes HHI and IAIS, aims at significantly speeding up the process of creating annotations of audio-visual data for humanities research. To achieve this, a range of state-of-the-art audio and video pattern recognition algorithms have been developed and integrated into the widely used ELAN annotation tool. To address the problem of heterogeneous annotation tasks and recordings, we provide modular components extended by adaptation and feedback mechanisms to achieve competitive annotation quality within significantly less annotation time.
  • Lenkiewicz, P., Wittenburg, P., Gebre, B. G., Lenkiewicz, A., Schreer, O., & Masneri, S. (2011). Application of video processing methods for linguistic research. In Z. Vetulani (Ed.), Human language technologies as a challenge for computer science and linguistics. Proceedings of the 5th Language and Technology Conference (LTC 2011), November 25-27, 2011, Poznań, Poland (pp. 561-564).

    Abstract

    That all modern languages evolve and change is a well-known fact. Recently, however, this change has reached a pace never seen before, which results in the loss of the vast amount of information encoded in every language. In order to preserve this heritage, properly annotated recordings of the world's languages are necessary. Since creating those annotations is a very laborious task, taking up to 100 times longer than the length of the annotated media, innovative video processing algorithms are needed in order to improve the efficiency and quality of the annotation process.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2011). Extended whole mesh deformation model: Full 3D processing. In Proceedings of the 2011 IEEE International Conference on Image Processing (pp. 1633-1636).

    Abstract

    Processing medical data has always been an interesting field that has shown the need for effective image segmentation methods. Modern medical image segmentation solutions are focused on 3D image volumes, which originate at advanced acquisition devices. Operating on such data in a 3D environment is essential in order to take full advantage of the available information. In this paper we present an extended version of our 3D image segmentation and reconstruction model that belongs to the family of Deformable Models and is capable of processing large image volumes in competitive times and in a fully 3D environment, offering a high level of automation of the process and a high precision of results. It is also capable of handling topology changes and offers very good scalability on multi-processing unit architectures. We present a description of the model and show its capabilities in the field of medical image processing.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M., & Plomp, R. (1968). The appreciation of musical intervals. In J. M. M. Aler (Ed.), Proceedings of the fifth International Congress of Aesthetics, Amsterdam 1964 (pp. 901-904). The Hague: Mouton.
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Emergence of signal structure: Effects of duration constraints. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Recent work has investigated the emergence of structure in speech using experiments which use artificial continuous signals. Some experiments have had no limit on the duration which signals can have (e.g. Verhoef et al., 2014), and others have had time limitations (e.g. Verhoef et al., 2015). However, the effect of time constraints on the structure in signals has never been experimentally investigated.
  • Little, H., & de Boer, B. (2016). Did the pressure for discrimination trigger the emergence of combinatorial structure? In Proceedings of the 2nd Conference of the International Association for Cognitive Semiotics (pp. 109-110).
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Differing signal-meaning dimensionalities facilitates the emergence of structure. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    The structure of language is caused not only by cognitive processes, but also by physical aspects of the signalling modality. We test assumptions about the role that the physical aspects of the signal space play in the emergence of structure in speech. Here, we use a signal creation task to test whether a signal space and a meaning space having similar dimensionalities will generate an iconic system with signal-meaning mapping, and whether, when the topologies differ, the emergence of non-iconic structure is facilitated. In our experiments, signals are created using infrared sensors which use hand position to create audio signals. We find that people take advantage of signal-meaning mappings where possible. Further, we use trajectory probabilities and measures of variance to show that when there is a dimensionality mismatch, more structural strategies are used.
  • Little, H. (2016). Nahran Bhannamz: Language Evolution in an Online Zombie Apocalypse Game. In Createvolang: creativity and innovation in language evolution.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized Size-Sound Sound Symbolism. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1823-1828). Austin, TX: Cognitive Science Society.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • Macuch Silva, V., & Roberts, S. G. (2016). Language adapts to signal disruption in interaction. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/20.html.

    Abstract

    Linguistic traits are often seen as reflecting cognitive biases and constraints (e.g. Christiansen & Chater, 2008). However, language must also adapt to properties of the channel through which communication between individuals occurs. Perhaps the most basic aspect of any communication channel is noise. Communicative signals can be blocked, degraded or distorted by other sources in the environment. This poses a fundamental problem for communication. On average, channel disruption accompanies problems in conversation every 3 minutes (27% of cases of other-initiated repair, Dingemanse et al., 2015). Linguistic signals must adapt to this harsh environment. While modern language structures are robust to noise (e.g. Piantadosi et al., 2011), we investigate how noise might have shaped the early emergence of structure in language. The obvious adaptation to noise is redundancy. Signals which are maximally different from competitors are harder to render ambiguous by noise. Redundancy can be increased by adding differentiating segments to each signal (increasing the diversity of segments). However, this makes each signal more complex and harder to learn. Under this strategy, holistic languages may emerge. Another strategy is reduplication - repeating parts of the signal so that noise is less likely to disrupt all of the crucial information. This strategy does not increase the difficulty of learning the language - there is only one extra rule which applies to all signals. Therefore, under pressures for learnability, expressivity and redundancy, reduplicated signals are expected to emerge. However, reduplication is not a pervasive feature of words (though it does occur in limited domains like plurals or iconic meanings). We suggest that this is due to the pressure for redundancy being lifted by conversational infrastructure for repair. Receivers can request that senders repeat signals only after a problem occurs. 
That is, robustness is achieved by repeating the signal across conversational turns (when needed) instead of within single utterances. As a proof of concept, we ran two iterated learning chains with pairs of individuals in generations learning and using an artificial language (e.g. Kirby et al., 2015). The meaning space was a structured collection of unfamiliar images (3 shapes x 2 textures x 2 outline types). The initial language for each chain was the same written, unstructured, fully expressive language. Signals produced in each generation formed the training language for the next generation. Within each generation, pairs played an interactive communication game. The director was given a target meaning to describe, and typed a word for the matcher, who guessed the target meaning from a set. With a 50% probability, a contiguous section of 3-5 characters in the typed word was replaced by ‘noise’ characters (#). In one chain, the matcher could initiate repair by requesting that the director type and send another signal. Parallel generations across chains were matched for the number of signals sent (if repair was initiated for a meaning, then it was presented twice in the parallel generation where repair was not possible) and noise (a signal for a given meaning which was affected by noise in one generation was affected by the same amount of noise in the parallel generation). For the final set of signals produced in each generation we measured the signal redundancy (the zip compressibility of the signals), the character diversity (entropy of the characters of the signals) and systematic structure (z-score of the correlation between signal edit distance and meaning hamming distance). 
In the condition without repair, redundancy increased with each generation (r=0.97, p=0.01), and the character diversity decreased (r=-0.99, p=0.001), which is consistent with reduplication, as shown below (part of the initial and the final language): Linear regressions revealed that generations with repair had higher overall systematic structure (main effect of condition, t = 2.5, p < 0.05), increasing character diversity (interaction between condition and generation, t = 3.9, p = 0.01), and redundancy that increased at a slower rate (interaction between condition and generation, t = -2.5, p < 0.05). That is, the ability to repair counteracts the pressure from noise, and facilitates the emergence of compositional structure. Therefore, just as systems to repair damage to DNA replication are vital for the evolution of biological species (O’Brien, 2006), conversational repair may regulate replication of linguistic forms in the cultural evolution of language. Future studies should further investigate how evolving linguistic structure is shaped by interaction pressures, drawing on experimental methods and naturalistic studies of emerging languages, both spoken (e.g. Botha, 2006; Roberge, 2008) and signed (e.g. Senghas, Kita, & Ozyurek, 2004; Sandler et al., 2005).
  • Majid, A., Van Staden, M., & Enfield, N. J. (2004). The human body in cognition, brain, and typology. In K. Hovie (Ed.), Forum Handbook, 4th International Forum on Language, Brain, and Cognition - Cognition, Brain, and Typology: Toward a Synthesis (pp. 31-35). Sendai: Tohoku University.

    Abstract

    The human body is unique: it is both an object of perception and the source of human experience. Its universality makes it a perfect resource for asking questions about how cognition, brain and typology relate to one another. For example, we can ask how speakers of different languages segment and categorize the human body. A dominant view is that body parts are “given” by visual perceptual discontinuities, and that words are merely labels for these visually determined parts (e.g., Andersen, 1978; Brown, 1976; Lakoff, 1987). However, there are problems with this view. First, it ignores other perceptual information, such as somatosensory and motoric representations. By looking at the neural representations of sensory information, we can test how much of the categorization of the human body can be done through perception alone. Second, we can look at language typology to see how much universality and variation there is in body-part categories. A comparison of a range of typologically, genetically and areally diverse languages shows that the perceptual view has only limited applicability (Majid, Enfield & van Staden, in press). For example, using a “coloring-in” task, where speakers of seven different languages were given a line drawing of a human body and asked to color in various body parts, Majid & van Staden (in prep) show that languages vary substantially in body part segmentation. For example, Jahai (Mon-Khmer) makes a lexical distinction between upper arm, lower arm, and hand, but Lavukaleve (Papuan Isolate) has just one word to refer to arm, hand, and leg. This shows that body part categorization is not a straightforward mapping of words to visually determined perceptual parts.
  • Majid, A., Van Staden, M., Boster, J. S., & Bowerman, M. (2004). Event categorization: A cross-linguistic perspective. In K. Forbus, D. Gentner, & T. Tegier (Eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society (pp. 885-890). Mahwah, NJ: Erlbaum.

    Abstract

    Many studies in cognitive science address how people categorize objects, but there has been comparatively little research on event categorization. This study investigated the categorization of events involving material destruction, such as “cutting” and “breaking”. Speakers of 28 typologically, genetically, and areally diverse languages described events shown in a set of video-clips. There was considerable cross-linguistic agreement in the dimensions along which the events were distinguished, but there was variation in the number of categories and the placement of their boundaries.
  • Majid, A., & Levinson, S. C. (Eds.). (2011). The senses in language and culture [Special Issue]. The Senses & Society, 6(1).
  • Majid, A., & Levinson, S. C. (2011). The language of perception across cultures [Abstract]. Abstracts of the XXth Congress of European Chemoreception Research Organization, ECRO-2010. Publ. in Chemical Senses, 36(1), E7-E8.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is “wired up”, and to what extent is it a question of local cultural preoccupation? The “Language of Perception” project tests the hypothesis that some perceptual domains may be more “ineffable” – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes has been used to elicit descriptions from speakers of more than twenty languages, including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages. Moreover, we have identified exquisite elaboration in the olfactory domain in some cultural settings, contrary to some contemporary predictions within the cognitive sciences. These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.
  • Malt, B. C., Ameel, E., Gennari, S., Imai, M., Saji, N., & Majid, A. (2011). Do words reveal concepts? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 519-524). Austin, TX: Cognitive Science Society.

    Abstract

    To study concepts, cognitive scientists must first identify some. The prevailing assumption is that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world by name. Either ordinary concepts must be heavily language-dependent or names cannot be a direct route to concepts. We asked English, Dutch, Spanish, and Japanese speakers to name videos of human locomotion and judge their similarities. We investigated what name inventories and scaling solutions on name similarity and on physical similarity for the groups individually and together suggest about the underlying concepts. Aggregated naming and similarity solutions converged on results distinct from the answers suggested by the word inventories and scaling solutions of any single language. Words such as triangle, table, and robin can help identify the conceptual space of a domain, but they do not directly reveal units of knowledge usefully considered 'concepts'.
  • de Marneffe, M.-C., Tomlinson, J. J., Tice, M., & Sumner, M. (2011). The interaction of lexical frequency and phonetic variation in the perception of accented speech. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3575-3580). Austin, TX: Cognitive Science Society.

    Abstract

    How listeners understand spoken words despite massive variation in the speech signal is a central issue for linguistic theory. A recent focus on lexical frequency and specificity has proved fruitful in accounting for this phenomenon. Speech perception, though, is a multi-faceted process and likely incorporates a number of mechanisms to map a variable signal to meaning. We examine a well-established language use factor — lexical frequency — and how this factor is integrated with phonetic variability during the perception of accented speech. We show that an integrated perspective highlights a low-level perceptual mechanism that accounts for the perception of accented speech absent native contrasts, while shedding light on the use of interactive language factors in the perception of spoken words.
  • Matsuo, A. (2004). Young children's understanding of ongoing vs. completion in present and perfective participles. In J. v. Kampen, & S. Baauw (Eds.), Proceedings of GALA 2003 (pp. 305-316). Utrecht: Netherlands Graduate School of Linguistics (LOT).
  • Mazzone, M., & Campisi, E. (2010). Embodiment, metafore, comunicazione. In G. P. Storari, & E. Gola (Eds.), Forme e formalizzazioni. Atti del XVI congresso nazionale. Cagliari: CUEC.
  • Mazzone, M., & Campisi, E. (2010). Are there communicative intentions? In L. A. Pérez Miranda, & A. I. Madariaga (Eds.), Advances in cognitive science. IWCogSc-10. Proceedings of the ILCLI International Workshop on Cognitive Science Workshop on Cognitive Science (pp. 307-322). Bilbao, Spain: The University of the Basque Country.

    Abstract

    Grice in pragmatics and Levelt in psycholinguistics have proposed models of human communication where the starting point of communicative action is an individual intention. This assumption, though, has to face serious objections with regard to the alleged existence of explicit representations of the communicative goals to be pursued. Here evidence is surveyed which shows that in fact speaking may ordinarily be a quite automatic activity prompted by contextual cues and driven by behavioural schemata abstracted away from social regularities. On the one hand, this means that there could exist no intentions in the sense of explicit representations of communicative goals, following from deliberate reasoning and triggering the communicative action. On the other hand, however, there are reasons to allow for a weaker notion of intention than this, according to which communication is an intentional affair, after all. Communicative action is said to be intentional in this weaker sense to the extent that it is subject to a double mechanism of control, with respect both to present-directed and future-directed intentions.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.
  • Micklos, A. (2016). Interaction for facilitating conventionalization: Negotiating the silent gesture communication of noun-verb pairs. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/143.html.

    Abstract

    This study demonstrates how interaction – specifically negotiation and repair – facilitates the emergence, evolution, and conventionalization of a silent gesture communication system. In a modified iterated learning paradigm, partners communicated noun-verb meanings using only silent gesture. The need to disambiguate similar noun-verb pairs drove these "new" language users to develop a morphology that allowed for quicker processing, easier transmission, and improved accuracy. The specific morphological system that emerged came about through a process of negotiation within the dyad, namely by means of repair. By applying a discourse analytic approach to the use of repair in an experimental methodology for language evolution, we are able to determine not only if interaction facilitates the emergence and learnability of a new communication system, but also how interaction affects such a system.
  • Mitterer, H. (2011). Social accountability influences phonetic alignment. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2442.

    Abstract

    Speakers tend to take over the articulatory habits of their interlocutors [e.g., Pardo, JASA (2006)]. This phonetic alignment could be the consequence of either a social mechanism or a direct and automatic link between speech perception and production. The latter assumes that social variables should have little influence on phonetic alignment. To test this, participants were engaged in a "cloze task" (i.e., Stimulus: "In fantasy movies, silver bullets are used to kill ..." Response: "werewolves") with either one or four interlocutors. Given findings with the Asch-conformity paradigm in social psychology, multiple consistent speakers should exert a stronger force on the participant to align. To control the speech style of the interlocutors, their questions and answers were pre-recorded in either a formal or a casual speech style. The stimuli's speech style was then manipulated between participants and was consistent throughout the experiment for a given participant. Surprisingly, participants aligned less with the speech style if there were multiple interlocutors. This may reflect a "diffusion of responsibility": Participants may find it more important to align when they interact with only one person than with a larger group.
  • Mulder, K., Ten Bosch, L., & Boves, L. (2016). Comparing different methods for analyzing ERP signals. In Proceedings of Interspeech 2016: The 17th Annual Conference of the International Speech Communication Association (pp. 1373-1377). doi:10.21437/Interspeech.2016-967.
  • Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., & Tily, H. (2010). Crowdsourcing and language studies: The new generation of linguistic data. In Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Proceedings of the Workshop (pp. 122-130). Stroudsburg, PA: Association for Computational Linguistics.
  • Nordhoff, S., & Hammarström, H. (2011). Glottolog/Langdoc: Defining dialects, languages, and language families as collections of resources. In Proceedings of the First International Workshop on Linked Science 2011 (LISC2011), Bonn, Germany, October 24, 2011.

    Abstract

    This paper describes the Glottolog/Langdoc project, an attempt to provide near-total bibliographical coverage of descriptive resources for the world's languages. Every reference is treated as a resource, as is every "languoid" [1]. References are linked to the languoids which they describe, and languoids are linked to the references that describe them. Family relations between languoids are modeled in SKOS, as are relations across different classifications of the same languages. This setup allows the representation of languoids as collections of references, rendering the question of the definition of entities like 'Scots', 'West-Germanic' or 'Indo-European' more empirical.
  • Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1182-1187). Austin, TX: Cognitive Science Society.

    Abstract

    There is a long-standing assumption that gestural forms are geared by a set of modes of representation (acting, representing, drawing, moulding), with each technique expressing speakers’ focus of attention on specific aspects of referents (Müller, 2013). Beyond different taxonomies describing the modes of representation, it remains unclear what factors motivate certain depicting techniques over others. Results from a pantomime generation task show that pantomimes are not entirely idiosyncratic but rather follow generalisable patterns constrained by their semantic category. We show that a) specific modes of representation are preferred for certain objects (acting for manipulable objects and drawing for non-manipulable objects); and b) that use and ordering of deictics and modes of representation operate in tandem to distinguish between semantically related concepts (e.g., “to drink” vs “mug”). This study provides yet more evidence that our ability to communicate through silent gesture reveals systematic ways to describe events and objects around us.
  • Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).

    Abstract

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27 2009. Revised selected papers (pp. 1-10). Berlin: Springer.
  • Pallier, C., Cutler, A., & Sebastian-Galles, N. (1997). Prosodic structure and phonetic processing: A cross-linguistic study. In Proceedings of EUROSPEECH 97 (pp. 2131-2134). Grenoble, France: ESCA.

    Abstract

    Dutch and Spanish differ in how predictable the stress pattern is as a function of the segmental content: it is correlated with syllable weight in Dutch but not in Spanish. In the present study, two experiments were run to compare the abilities of Dutch and Spanish speakers to separately process segmental and stress information. It was predicted that the Spanish speakers would have more difficulty focusing on the segments and ignoring the stress pattern than the Dutch speakers. The task was a speeded classification task on CVCV syllables, with blocks of trials in which the stress pattern could vary versus blocks in which it was fixed. First, we found interference due to stress variability in both languages, suggesting that the processing of segmental information cannot be performed independently of stress. Second, the effect was larger for Spanish than for Dutch, suggesting that the degree of interference from stress variation may be partially mitigated by the predictability of stress placement in the language.
  • Peeters, D. (2016). Processing consequences of onomatopoeic iconicity in spoken language comprehension. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1632-1647). Austin, TX: Cognitive Science Society.

    Abstract

    Iconicity is a fundamental feature of human language. However, its processing consequences at the behavioral and neural level in spoken word comprehension are not well understood. The current paper presents the behavioral and electrophysiological outcome of an auditory lexical decision task in which native speakers of Dutch listened to onomatopoeic words and matched control words while their electroencephalogram was recorded. Behaviorally, onomatopoeic words were processed as quickly and accurately as words with an arbitrary mapping between form and meaning. Event-related potentials time-locked to word onset revealed a significant decrease in negative amplitude in the N2 and N400 components and a late positivity for onomatopoeic words in comparison to the control words. These findings advance our understanding of the temporal dynamics of iconic form-meaning mapping in spoken word comprehension and suggest interplay between the neural representations of real-world sounds and spoken words.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process.
  • Raviv, L., & Arnon, I. (2016). The developmental trajectory of children's statistical learning abilities. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1469-1474). Austin, TX: Cognitive Science Society.

    Abstract

    Infants, children and adults are capable of implicitly extracting regularities from their environment through statistical learning (SL). SL is present from early infancy and found across tasks and modalities, raising questions about the domain generality of SL. However, little is known about its developmental trajectory: Is SL a fully developed capacity in infancy, or does it improve with age, like other cognitive skills? While SL is well established in infants and adults, only a few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, despite its postulated role in language learning, no study has examined the developmental trajectory of auditory SL throughout childhood. Here, we conduct a large-scale study of children's auditory SL across a wide age range (5-12y, N=115). Results show that auditory SL does not change much across development. We discuss implications for modality-based differences in SL and for its role in language acquisition.
  • Raviv, L., & Arnon, I. (2016). Language evolution in the lab: The case of child learners. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1643-1648). Austin, TX: Cognitive Science Society.

    Abstract

    Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date, no published study has demonstrated a similar emergence of linguistic structure in children. This gap is problematic given that languages are mainly learned by children and that adults may bring existing linguistic biases to the task. Here, we conduct a large-scale study of iterated language learning in both children and adults, using a novel, child-friendly paradigm. The results show that while children make more mistakes overall, their languages become more learnable and show learnability biases similar to those of adults. Child languages did not show a significant increase in linguistic structure over time, but consistent mappings between meanings and signals did emerge on many occasions, as found with adults. This provides the first demonstration that cultural transmission affects the languages children and adults produce similarly.
  • Regier, T., Khetarpal, N., & Majid, A. (2011). Inferring conceptual structure from cross-language data. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1488). Austin, TX: Cognitive Science Society.
  • Reinisch, E., & Weber, A. (2011). Adapting to lexical stress in a foreign accent. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1678-1681). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    An exposure-test paradigm was used to examine whether Dutch listeners can adapt their perception to non-canonical marking of lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard only words with correct initial stress, while another group also heard examples of unstressed initial syllables that were marked by high pitch, a possible stress cue in Dutch. Subsequently, listeners’ eye movements to target-competitor pairs with segmental overlap but different stress patterns were tracked while hearing Hungarian-accented Dutch. Listeners who had heard non-canonically produced words previously distinguished target-competitor pairs faster than listeners who had only been exposed to canonical forms before. This suggests that listeners can adapt quickly to speaker-specific realizations of non-canonical lexical stress.
  • Reinisch, E., Weber, A., & Mitterer, H. (2011). Listeners retune phoneme boundaries across languages [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2572-2572.

    Abstract

    Listeners can flexibly retune category boundaries of their native language to adapt to non-canonically produced phonemes. This only occurs, however, if the pronunciation peculiarities can be attributed to stable and not transient speaker-specific characteristics. Listening to someone speaking a second language, listeners could attribute non-canonical pronunciations either to the speaker or to the fact that she is modifying her categories in the second language. We investigated whether, following exposure to Dutch-accented English, Dutch listeners show effects of category retuning during test where they hear the same speaker speaking her native language, Dutch. Exposure was a lexical-decision task where either word-final [f] or [s] was replaced by an ambiguous sound. At test listeners categorized minimal word pairs ending in sounds along an [f]-[s] continuum. Following exposure to English words, Dutch listeners showed boundary shifts of a similar magnitude as following exposure to the same phoneme variants in their native language. This suggests that production patterns in a second language are deemed a stable characteristic. A second experiment suggests that category retuning also occurs when listeners are exposed to and tested with a native speaker of their second language. Listeners thus retune phoneme boundaries across languages.
  • Reinisch, E., Jesse, A., & Nygaard, L. C. (2010). Tone of voice helps learning the meaning of novel adjectives [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 114). York: University of York.

    Abstract

    To understand spoken words listeners have to cope with seemingly meaningless variability in the speech signal. Speakers vary, for example, their tone of voice (ToV) by changing speaking rate, pitch, vocal effort, and loudness. This variation is independent of "linguistic prosody" such as sentence intonation or speech rhythm. The variation due to ToV, however, is not random. Speakers use, for example, higher pitch when referring to small objects than when referring to large objects and importantly, adult listeners are able to use these non-lexical ToV cues to distinguish between the meanings of antonym pairs (e.g., big-small; Nygaard, Herold, & Namy, 2009). In the present study, we asked whether listeners infer the meaning of novel adjectives from ToV and subsequently interpret these adjectives according to the learned meaning even in the absence of ToV. Moreover, if listeners actually acquire these adjectival meanings, then they should generalize these word meanings to novel referents. ToV would thus be a semantic cue to lexical acquisition. This hypothesis was tested in an exposure-test paradigm with adult listeners. In the experiment listeners' eye movements to picture pairs were monitored. The picture pairs represented the endpoints of the adjectival dimensions big-small, hot-cold, and strong-weak (e.g., an elephant and an ant represented big-small). Four picture pairs per category were used. While viewing the pictures participants listened to lexically unconstraining sentences containing novel adjectives, for example, "Can you find the foppick one?" During exposure, the sentences were spoken in infant-directed speech with the intended adjectival meaning expressed by ToV. Word-meaning pairings were counterbalanced across participants. Each word was repeated eight times. Listeners had no explicit task. 
To guide listeners' attention to the relation between the words and pictures, three sets of filler trials were included that contained real English adjectives (e.g., full-empty). In the subsequent test phase participants heard the novel adjectives in neutral adult-directed ToV. Test sentences were recorded before the speaker was informed about intended word meanings. Participants had to choose which of two pictures on the screen the speaker referred to. Picture pairs that were presented during the exposure phase and four new picture pairs per category that varied along the critical dimensions were tested. During exposure listeners did not spontaneously direct their gaze to the intended referent at the first presentation. But as indicated by listeners' fixation behavior, they quickly learned the relationship between ToV and word meaning over only two exposures. Importantly, during test participants consistently identified the intended referent object even in the absence of informative ToV. Learning was found for all three tested categories and did not depend on whether the picture pairs had been presented during exposure. Listeners thus use ToV not only to distinguish between antonym pairs but they are able to extract word meaning from ToV and assign this meaning to novel words. The newly learned word meanings can then be generalized to novel referents even in the absence of ToV cues. These findings suggest that ToV can be used as a semantic cue to lexical acquisition.
References: Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009). The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33, 127-146.
  • Reis, A., Faísca, L., Castro, S.-L., & Petersson, K. M. (2010). Preditores da leitura ao longo da escolaridade: Um estudo com alunos do 1 ciclo do ensino básico. In Actas do VII simpósio nacional de investigação em psicologia (pp. 3117-3132).

    Abstract

    [Translated from Portuguese] Reading acquisition unfolds across several stages, from the moment the child first comes into contact with the alphabet to the moment he or she becomes a competent reader, able to read accurately and fluently. Understanding the development of this skill through an analysis of how the weight of predictor variables of reading changes makes it possible to theorize about the cognitive mechanisms involved in the different phases of reading development. We conducted a cross-sectional study with 568 students from the second to the fourth year of primary school, in which we assessed the impact of phonological processing abilities, rapid naming, letter-sound knowledge and vocabulary, as well as more general cognitive abilities (non-verbal intelligence and working memory), on reading accuracy and speed. Overall, the results showed that, although phonological awareness remained the most important predictor of reading accuracy and fluency, its weight decreased as schooling progressed. We also observed that, as the contribution of phonological awareness to explaining reading speed diminished, the contribution of other variables more associated with automaticity and lexical recognition, such as rapid naming and vocabulary, increased. In sum, across the school years there is a dynamic shift in the cognitive processes underlying reading, suggesting that the child evolves from a reading strategy anchored in sub-lexical processing, and as such more dependent on phonological processing, to a strategy based on orthographic recognition of words.
  • Rodd, J., & Chen, A. (2016). Pitch accents show a perceptual magnet effect: Evidence of internal structure in intonation categories. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 697-701).

    Abstract

    The question of whether intonation events have a categorical mental representation has long been a puzzle in prosodic research, and one that experiments testing production and perception across category boundaries have failed to definitively resolve. This paper takes the alternative approach of looking for evidence of structure within a postulated category by testing for a Perceptual Magnet Effect (PME). PME has been found in boundary tones but has not previously been conclusively found in pitch accents. In this investigation, perceived goodness and discriminability of re-synthesised Dutch nuclear rise contours (L*H H%) were evaluated by naive native speakers of Dutch. The variation between these stimuli was quantified using a polynomial-parametric modelling approach (i.e. the SOCoPaSul model) in place of the traditional approach whereby excursion size, peak alignment and pitch register are used independently of each other to quantify variation between pitch accents. Using this approach to calculate the acoustic-perceptual distance between different stimuli, PME was detected: (1) rated goodness decreased as acoustic-perceptual distance relative to the prototype increased, and (2) equally spaced items far from the prototype were less frequently generalised than equally spaced items in the neighbourhood of the prototype. These results support the concept of categorically distinct intonation events.
  • Romberg, A., Zhang, Y., Newman, B., Triesch, J., & Yu, C. (2016). Global and local statistical regularities control visual attention to object sequences. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 262-267).

    Abstract

    Many previous studies have shown that both infants and adults are skilled statistical learners. Because statistical learning is affected by attention, learners' ability to manage their attention can play a large role in what they learn. However, it is still unclear how learners allocate their attention in order to gain information in a visual environment containing multiple objects, especially how prior visual experience (i.e., familiarly of objects) influences where people look. To answer these questions, we collected eye movement data from adults exploring multiple novel objects while manipulating object familiarity with global (frequencies) and local (repetitions) regularities. We found that participants are sensitive to both global and local statistics embedded in their visual environment and they dynamically shift their attention to prioritize some objects over others as they gain knowledge of the objects and their distributions within the task.
  • Rossi, G. (2010). Interactive written discourse: Pragmatic aspects of SMS communication. In G. Garzone, P. Catenaccio, & C. Degano (Eds.), Diachronic perspectives on genres in specialized communication. Conference Proceedings (pp. 135-138). Milano: CUEM.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces.(LREC2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • Sadakata, M., Van der Zanden, L., & Sekiyama, K. (2010). Influence of musical training on perception of L2 speech. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 118-121).

    Abstract

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels). Native Dutch and Japanese speakers with different musical training experience, matched for their estimated verbal IQ, participated in the experiments. Results indicated that musical training strongly increases one’s ability to perceive timing information in speech signals. We also found a benefit of musical training on discrimination performance for a subset of the tested vowel contrasts.
  • Sadakata, M., & McQueen, J. M. (2011). The role of variability in non-native perceptual learning of a Japanese geminate-singleton fricative contrast. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 873-876).

    Abstract

    The current study reports the enhancing effect of a high variability training procedure in the learning of a Japanese geminate-singleton fricative contrast. Dutch natives took part in a five-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. They heard either many repetitions of a limited set of words recorded by a single speaker (simple training) or fewer repetitions of a more variable set of words recorded by multiple speakers (variable training). Pre-post identification evaluations and a transfer test indicated clear benefits of the variable training.
  • Sauermann, A., Höhle, B., Chen, A., & Järvikivi, J. (2011). Intonational marking of focus in different word orders in German children. In M. B. Washburn, K. McKinney-Bock, E. Varis, & A. Sawyer (Eds.), Proceedings of the 28th West Coast Conference on Formal Linguistics (pp. 313-322). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    The use of word order and intonation to mark focus in child speech has received some attention. However, past work usually examined each device separately or only compared the realizations of focused vs. non-focused constituents. This paper investigates the interaction between word order and intonation in the marking of different focus types in 4- to 5-year-old German-speaking children and an adult control group. An answer-reconstruction task was used to elicit syntactic (word order) and intonational focus marking of subjects and objects (locus of focus) in three focus types (broad, narrow, and contrastive focus). The results indicate that both children and adults used intonation to distinguish broad from contrastive focus, but they differed in the marking of narrow focus. Further, both groups preferred intonation to word order as a device for focus marking. But children showed an early sensitivity to the impact of focus type and focus location on word order variation and on phonetic means of marking focus.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions and little work has been done to specifically investigate happiness, the only positive of the basic emotions (Ekman & Friesen, 1971). However, a theoretical suggestion has been made that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Sauter, D. (2010). Non-verbal emotional vocalizations across cultures [Abstract]. In E. Zimmermann, & E. Altenmüller (Eds.), Evolution of emotional communication: From sounds in nonhuman mammals to speech and music in man (pp. 15). Hannover: University of Veterinary Medicine Hannover.

    Abstract

    Despite differences in language, culture, and ecology, some human characteristics are similar in people all over the world, while other features vary from one group to the next. These similarities and differences can inform arguments about what aspects of the human mind are part of our shared biological heritage and which are predominantly products of culture and language. I will present data from a cross-cultural project investigating the recognition of non-verbal vocalizations of emotions, such as screams and laughs, across two highly different cultural groups. English participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognised. In contrast, a set of additional positive emotions was only recognised within, but not across, cultural boundaries. These results indicate that a number of primarily negative emotions are associated with vocalizations that can be recognised across cultures, while at least some positive emotions are communicated with culture-specific signals. I will discuss these findings in the context of accounts of emotions at differing levels of analysis, with an emphasis on the often-neglected positive emotions.
  • Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. In C. Douilliez, & C. Humez (Eds.), Third European Conference on Emotion 2010. Proceedings (pp. 39-39). Lille: Université de Lille.

    Abstract

    Many studies suggest that emotional signals can be recognized across cultures and modalities. But to what extent are these signals innate and to what extent are they learned? This study investigated whether auditory learning is necessary for the production of recognizable emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of eight congenitally deaf Dutch individuals, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25). Considerable variability was found across emotions, suggesting that auditory learning is more important for the acquisition of certain types of vocalizations than for others. In particular, achievement and surprise sounds were relatively poorly recognized. In contrast, amusement and disgust vocalizations were well recognized, suggesting that for some emotions, recognizable vocalizations can develop without any auditory learning. The implications of these results for models of emotional communication are discussed, and other routes of social learning available to the deaf individuals are considered.
  • Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. Journal of the Acoustical Society of America, 128, 2476.

    Abstract

    Vocalizations like screams and laughs are used to communicate affective states, but what acoustic cues in these signals require vocal learning and which ones are innate? This study investigated the role of auditory learning in the production of non-verbal emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of congenitally deaf Dutch individuals and matched hearing controls, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25), and judgments were analyzed together with acoustic cues, including envelope, pitch, and spectral measures. Considerable variability was found across emotions and acoustic cues, and the two types of information were related for a sub-set of the emotion categories. These results suggest that auditory learning is less important for the acquisition of certain types of vocalizations than for others (particularly amusement and relief), and they also point to a less central role for auditory learning of some acoustic features in affective non-verbal vocalizations. The implications of these results for models of vocal emotional communication are discussed.
  • Scharenborg, O., Boves, L., & Ten Bosch, L. (2004). ‘On-line early recognition’ of polysyllabic words in continuous speech. In S. Cassidy, F. Cox, R. Mannell, & P. Sallyanne (Eds.), Proceedings of the Tenth Australian International Conference on Speech Science & Technology (pp. 387-392). Canberra: Australian Speech Science and Technology Association Inc.

    Abstract

    In this paper, we investigate the ability of SpeM, our recognition system based on the combination of an automatic phone recogniser and a wordsearch module, to determine as early as possible during the word recognition process whether a word is likely to be recognised correctly (this we refer to as ‘on-line’ early word recognition). We present two measures that can be used to predict whether a word is correctly recognised: the Bayesian word activation and the amount of available (acoustic) information for a word. SpeM was tested on 1,463 polysyllabic words in 885 continuous speech utterances. The investigated predictors indicated that a word activation that is 1) high (but not too high) and 2) based on more phones is more reliable to predict the correctness of a word than a similarly high value based on a small number of phones or a lower value of the word activation.
  • Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning of liquids. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 149-152).

    Abstract

    Previous research on lexically-guided perceptual learning has focussed on contrasts that differ primarily in local cues, such as plosive and fricative contrasts. The present research had two aims: to investigate whether perceptual learning occurs for a contrast with non-local cues, the /l/-/r/ contrast, and to establish whether STRAIGHT can be used to create ambiguous sounds on an /l/-/r/ continuum. Listening experiments showed lexically-guided learning about the /l/-/r/ contrast. Listeners can thus tune in to unusual speech sounds characterised by non-local cues. Moreover, STRAIGHT can be used to create stimuli for perceptual learning experiments, opening up new research possibilities. Index Terms: perceptual learning, morphing, liquids, human word recognition, STRAIGHT.