Publications

  • Skiba, R. (1989). Funktionale Beschreibung von Lernervarietäten: Das Berliner Projekt P-MoLL. In N. Reiter (Ed.), Sprechen und Hören: Akte des 23. Linguistischen Kolloquiums, Berlin (pp. 181-191). Tübingen: Niemeyer.
  • Slobin, D. I. (2002). Cognitive and communicative consequences of linguistic diversity. In S. Strömqvist (Ed.), The diversity of languages and language learning (pp. 7-23). Lund, Sweden: Lund University, Centre for Languages and Literature.
  • De Smedt, K., & Kempen, G. (1991). Segment Grammar: A formalism for incremental sentence generation. In C. Paris, W. Swartout, & W. Mann (Eds.), Natural language generation and computational linguistics (pp. 329-349). Dordrecht: Kluwer Academic Publishers.

    Abstract

    Incremental sentence generation imposes special constraints on the representation of the grammar and the design of the formulator (the module which is responsible for constructing the syntactic and morphological structure). In the model of natural speech production presented here, a formalism called Segment Grammar is used for the representation of linguistic knowledge. We give a definition of this formalism and present a formulator design which relies on it. Next, we present an object-oriented implementation of Segment Grammar. Finally, we compare Segment Grammar with other formalisms.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). Complex word recognition behaviour emerges from the richness of the word learning environment. In K. Twomey, A. C. Smith, G. Westermann, & P. Monaghan (Eds.), Neurocomputational Models of Cognitive Development and Processing: Proceedings of the 14th Neural Computation and Psychology Workshop (pp. 99-114). Singapore: World Scientific. doi:10.1142/9789814699341_0007.

    Abstract

    Computational models can reflect the complexity of human behaviour by implementing multiple constraints within their architecture, and/or by taking into account the variety and richness of the environment to which the human is responding. We explore the second alternative in a model of word recognition that learns to map spoken words to visual and semantic representations of the words’ concepts. Critically, we employ a phonological representation that utilises coarse coding of the auditory stream, to mimic early stages of language development which do not depend on individual phonemes being isolated in the input, a skill that may be a consequence of literacy development. The model was tested at different stages during training, and was able to simulate key behavioural features of word recognition in children: a developing effect of semantic information as a consequence of language learning, and a small but earlier effect of phonological information on word processing. We additionally tested the role of visual information in word processing, generating predictions for behavioural studies, showing that visual information could have a larger effect than semantics on children’s performance, but that again this affects recognition later in word processing than phonological information. The model also provides further predictions for performance of a mature word recognition system in the absence of fine-coding of phonology, such as in adults who have low literacy skills. The model demonstrated that such phonological effects may be reduced but are still evident even when multiple distractors from various modalities are present in the listener’s environment. The model demonstrates that complexity in word recognition can emerge from a simple associative system responding to the interactions between multiple sources of information in the language learner’s environment.
  • Stivers, T. (2004). Question sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 45-47). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506967.

    Abstract

    When people request information, they have a variety of means for eliciting it. In English, two of the primary resources for eliciting information are asking questions and making statements about one's interlocutor (thereby inviting confirmation or revision). But within these types there are a variety of ways that these information elicitors can be designed. The goal of this task is to examine how different languages seek and provide information, the extent to which syntactic vs. prosodic resources are used (e.g., in questions), and the extent to which the design of information-seeking actions and their responses displays a structural preference to promote social solidarity.
  • Sumer, B., & Ozyurek, A. (2016). İşitme Engelli Çocukların Dil Edinimi [Sign language acquisition by deaf children]. In C. Aydin, T. Goksun, A. Kuntay, & D. Tahiroglu (Eds.), Aklın Çocuk Hali: Zihin Gelişimi Araştırmaları [Research on Cognitive Development] (pp. 365-388). Istanbul: Koc University Press.
  • Sumer, B. (2016). Scene-setting and reference introduction in sign and spoken languages: What does modality tell us? In B. Haznedar, & F. N. Ketrez (Eds.), The acquisition of Turkish in childhood (pp. 193-220). Amsterdam: Benjamins.

    Abstract

    Previous studies show that children do not become adult-like in learning to set the scene and introduce referents in their narrations until 9 years of age and even beyond. However, those studies investigated spoken languages, so we know little about how these skills are acquired in sign languages, where events are expressed in ways that are visually similar to real-world events, unlike in spoken languages. The results of the current study demonstrate that deaf children (3;5–9;10 years) acquiring Turkish Sign Language and hearing children (3;8–9;11 years) acquiring spoken Turkish acquire scene-setting and referent-introduction skills at similar ages. Thus the modality of the language being acquired has neither a facilitating nor a hindering effect on the development of these skills.
  • Sumer, B., Zwitserlood, I., Perniss, P., & Ozyurek, A. (2016). Yer Bildiren İfadelerin Türkçe ve Türk İşaret Dili’nde (TİD) Çocuklar Tarafından Edinimi [The acquisition of spatial relations by children in Turkish and Turkish Sign Language (TID)]. In E. Arik (Ed.), Ellerle Konuşmak: Türk İşaret Dili Araştırmaları [Speaking with hands: Studies on Turkish Sign Language] (pp. 157-182). Istanbul: Koç University Press.
  • Terrill, A. (2004). Coordination in Lavukaleve. In M. Haspelmath (Ed.), Coordinating constructions (pp. 427-443). Amsterdam: John Benjamins.
  • Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.

    Abstract

    Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited until the current speaker was finished before planning their turn, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel with listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately timed response.

    To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.

    We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.

    Our findings suggest that speaker visibility during conversation, and especially the presence and kinematic form of gestures, contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
  • Van Valin Jr., R. D. (2016). An overview of information structure in three Amazonian languages. In M. Fernandez-Vest, & R. D. Van Valin Jr. (Eds.), Information structure and spoken language from a cross-linguistic perspective (pp. 77-92). Berlin: Mouton de Gruyter.
  • Van Wijk, C., & Kempen, G. (1982). Kost zinsbouw echt tijd? In R. Stuip, & W. Zwanenberg (Eds.), Handelingen van het zevenendertigste Nederlands Filologencongres (pp. 223-231). Amsterdam: APA-Holland University Press.
  • Van Berkum, J. J. A. (2004). Sentence comprehension in a wider discourse: Can we use ERPs to keep track of things? In M. Carreiras, & C. Clifton Jr. (Eds.), The on-line study of sentence comprehension: Eyetracking, ERPs and beyond (pp. 229-270). New York: Psychology Press.
  • Vernes, S. C., Janik, V. M., Fitch, W. T., & Slater, P. J. B. (Eds.). (2021). Vocal learning in animals and humans [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376.
  • Von Stutterheim, C., & Klein, W. (2004). Die Gesetze des Geistes sind metrisch: Hölderlin und die Sprachproduktion. In H. Schwarz (Ed.), Fenster zur Welt: Deutsch als Fremdsprachenphilologie (pp. 439-460). München: Iudicium.
  • Von Stutterheim, C., & Klein, W. (1989). Referential movement in descriptive and narrative discourse. In R. Dietrich, & C. F. Graumann (Eds.), Language processing in social context (pp. 39-76). Amsterdam: Elsevier.
  • Wittenburg, P., Broeder, D., Offenga, F., & Willems, D. (2002). Metadata set and tools for multimedia/multimodal language resources. In M. Maybury (Ed.), Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC 2002). Workshop on Multimodal Resources and Multimodal Systems Evaluation (pp. 9-13). Paris: European Language Resources Association.
  • Zwitserlood, I. (2002). Klassifikatoren in der Niederländischen Gebärdensprache (NGT). In H. Leuniger, & K. Wempe (Eds.), Gebärdensprachlinguistik 2000. Theorie und Anwendung. Vorträge vom Symposium "Gebärdensprachforschung im deutschsprachigem Raum", Frankfurt a.M., 11.-13. Juni 1999 (pp. 113-126). Hamburg: Signum Verlag.