Publications

Displaying 901 - 1000 of 1225
  • Rossano, F. (2004). Per una semiotica dell'interazione: Analisi del rapporto tra sguardo, corpo e parola in alcune interazioni faccia a faccia [Towards a semiotics of interaction: An analysis of the relationship between gaze, body, and speech in some face-to-face interactions]. Master's thesis, Università di Bologna, Bologna, Italy.
  • Rowland, C. F., & Kidd, E. (2019). Key issues and future directions: How do children acquire language? In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 181-185). Cambridge, MA: MIT Press.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2–3-year-old English-speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • Rubio-Fernández, P. (2019). Memory and inferential processes in false-belief tasks: An investigation of the unexpected-contents paradigm. Journal of Experimental Child Psychology, 177, 297-312. doi:10.1016/j.jecp.2018.08.011.

    Abstract

    This study investigated the extent to which 3- and 4-year-old children may rely on associative memory representations to pass an unexpected-contents false-belief task. In Experiment 1, 4-year-olds performed at chance in both a standard Smarties task and a modified version highlighting the secrecy of the contents of the tube. These results were interpreted as evidence that having to infer the answer to a false-belief question (without relying on memory representations) is generally difficult for preschool children. In Experiments 2a, 2b, and 2c, 3-year-olds were tested at 3-month intervals during their first year of preschool and showed better performance in a narrative version of the Smarties task (chance level) than in the standard version (below-chance level). These children performed even better in an associative version of the narrative task (above-chance level) where they could form a memory representation associating the protagonist with the expected contents of a box. The results of a true-belief control suggest that some of these children may have relied on their memory of the protagonist’s preference for the original contents of the box (rather than their understanding of what the protagonist was expecting to find inside). This suggests that when 3-year-olds passed the associative unexpected-contents task, some may have been keeping track of the protagonist’s initial preference and not only (or not necessarily) of the protagonist’s false belief. These results are interpreted in the light of current accounts of Theory of Mind development and failed replications of verbal false-belief tasks.
  • Rubio-Fernández, P. (2019). Publication standards in infancy research: Three ways to make Violation-of-Expectation studies more reliable. Infant Behavior and Development, 54, 177-188. doi:10.1016/j.infbeh.2018.09.009.

    Abstract

    The Violation-of-Expectation paradigm is a widespread paradigm in infancy research that relies on looking time as an index of surprise. This methodological review aims to increase the reliability of future VoE studies by proposing to standardize reporting practices in this literature. I review 15 VoE studies on false-belief reasoning, which used a variety of experimental parameters. An analysis of the distribution of p-values across experiments suggests an absence of p-hacking. However, there are potential concerns with the accuracy of their measures of infants’ attention, as well as with the lack of a consensus on the parameters that should be used to set up VoE studies. I propose that (i) future VoE studies ought to report not only looking times (as a measure of attention) but also looking-away times (as an equally important measure of distraction); (ii) VoE studies must offer theoretical justification for the parameters they use, and (iii) when parameters are selected through piloting, pilot data must be reported in order to understand how parameters were selected. Future VoE studies ought to maximize the accuracy of their measures of infants’ attention since the reliability of their results and the validity of their conclusions both depend on the accuracy of their measures.
  • Rubio-Fernández, P., Mollica, F., Oraa Ali, M., & Gibson, E. (2019). How do you know that? Automatic belief inferences in passing conversation. Cognition, 193: 104011. doi:10.1016/j.cognition.2019.104011.

    Abstract

    There is an ongoing debate, both in philosophy and psychology, as to whether people are able to automatically infer what others may know, or whether they can only derive belief inferences by deploying cognitive resources. Evidence from laboratory tasks, often involving false beliefs or visual-perspective taking, has suggested that belief inferences are cognitively costly, controlled processes. Here we suggest that in everyday conversation, belief reasoning is pervasive and therefore potentially automatic in some cases. To test this hypothesis, we conducted two pre-registered self-paced reading experiments (N1 = 91, N2 = 89). The results of these experiments showed that participants slowed down when a stranger commented ‘That greasy food is bad for your ulcer’ relative to conditions where a stranger commented on their own ulcer or a friend made either comment – none of which violated participants’ common-ground expectations. We conclude that Theory of Mind models need to account for belief reasoning in conversation as it is at the center of everyday social interaction.
  • Rubio-Fernández, P. (2019). Overinformative Speakers Are Cooperative: Revisiting the Gricean Maxim of Quantity. Cognitive Science, 43: e12797. doi:10.1111/cogs.12797.

    Abstract

    A pragmatic account of referential communication is developed which presents an alternative to traditional Gricean accounts by focusing on cooperativeness and efficiency, rather than informativity. The results of four language-production experiments support the view that speakers can be cooperative when producing redundant adjectives, doing so more often when color modification could facilitate the listener's search for the referent in the visual display (Experiment 1a). By contrast, when the listener knew which shape was the target, speakers did not produce redundant color adjectives (Experiment 1b). English speakers used redundant color adjectives more often than Spanish speakers, suggesting that speakers are sensitive to the differential efficiency of prenominal and postnominal modification (Experiment 2). Speakers were also cooperative when using redundant size adjectives (Experiment 3). Overall, these results show how discriminability affects a speaker's choice of referential expression above and beyond considerations of informativity, supporting the view that redundant speakers can be cooperative.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • Rubio-Fernández, P. (2019). Theory of mind. In C. Cummins, & N. Katsos (Eds.), The Handbook of Experimental Semantics and Pragmatics (pp. 524-536). Oxford: Oxford University Press.
  • De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van der Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).

    Abstract

    In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadow on speech-accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech they are supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing from other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing, which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing.
Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn-taking disappears completely. The implications for machine-generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) are a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when transforming a thought into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces (LREC 2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., & Toni, I. (2007). On the origins of intentions. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 593-610). Oxford: Oxford University Press.
  • De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer.
  • De Ruiter, J. P. (2004). Response systems and signals of recipiency. In A. Majid (Ed.), Field Manual Volume 9 (pp. 53-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506961.

    Abstract

    Listeners’ signals of recipiency, such as “Mm-hm” or “uh-huh” in English, are the most elementary or minimal “conversational turns” possible. Minimal, because apart from acknowledging recipiency and inviting the speaker to continue with his/her next turn, they do not add any new information to the discourse of the conversation. The goal of this project is to gather cross-cultural information on listeners’ feedback behaviour during conversation. Listeners in a conversation usually provide short signals that indicate to the speaker that they are still “with the speaker”. These signals could be verbal (like for instance “mm hm” in English or “hm hm” in Dutch) or nonverbal (visual), like nodding. Often, these signals are produced in overlap with the speaker’s vocalisation. If listeners do not produce these signals, speakers often invite them explicitly (e.g. “are you still there?” in a telephone conversation). Our goal is to investigate what kind of signals are used by listeners of different languages to signal “recipiency” to the speaker.
  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual mechanism models assume separate modules with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 x 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high frequency verbs irrespective of regularity. The PET data showed activations in inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • Sakarias, M., & Flecken, M. (2019). Keeping the result in sight and mind: General cognitive principles and language-specific influences in the perception and memory of resultative events. Cognitive Science, 43(1), 1-30. doi:10.1111/cogs.12708.

    Abstract

    We study how people attend to and memorize endings of events that differ in the degree to which objects in them are affected by an action: Resultative events show objects that undergo a visually salient change in state during the course of the event (peeling a potato), and non‐resultative events involve objects that undergo no, or only partial state change (stirring in a pan). We investigate general cognitive principles, and potential language‐specific influences, in verbal and nonverbal event encoding and memory, across two experiments with Dutch and Estonian participants. Estonian marks a viewer's perspective on an event's result obligatorily via grammatical case on direct object nouns: Objects undergoing a partial/full change in state in an event are marked with partitive/accusative case, respectively. Therefore, we hypothesized increased saliency of object states and event results in Estonian speakers, as compared to speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to endings of resultative events, implying cognitive saliency of object states in event processing; (b) a language‐specific boost on attention and memory of event results under verbal task demands in Estonian speakers. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
  • Salverda, A. P. (2005). Prosodically-conditioned detail in the recognition of spoken words. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57311.

    Abstract

    The research presented in this dissertation examined the influence of prosodically-conditioned detail on the recognition of spoken words. The main finding is that subphonemic information in the speech signal that is conditioned by constituent-level prosodic structure can affect lexical processing systematically. It was shown that such information, as indicated by and estimated from the lengthening of speech sounds in the vicinity of prosodic boundaries, can help listeners to distinguish onset-embedded words (e.g. 'ham') from longer words that have this word embedded at their onset (e.g. 'hamster'). Furthermore, it was shown that variation in the realization of a spoken word that is associated with its position in the prosodic structure of an utterance can affect lexical processing. The pattern of competitor activation associated with the recognition of a monosyllabic spoken word in utterance-final position, where the realization of the word is strongly affected by the utterance boundary, is different from that associated with the recognition of the same word in utterance-medial position, where the realization of the word is less strongly affected by the following prosodic-word boundary. Taken together, the findings attest to the extraordinary sensitivity of the spoken-word recognition system by demonstrating the relevance for lexical processing of very fine-grained phonetic detail conditioned by prosodic structure.

    Additional information

    full text via Radboud Repository
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Satizabal, C. L., Adams, H. H. H., Hibar, D. P., White, C. C., Knol, M. J., Stein, J. L., Scholz, M., Sargurupremraj, M., Jahanshad, N., Roshchupkin, G. V., Smith, A. V., Bis, J. C., Jian, X., Luciano, M., Hofer, E., Teumer, A., Van der Lee, S. J., Yang, J., Yanek, L. R., Lee, T. V., Li, S., Hu, Y., Koh, J. Y., Eicher, J. D., Desrivières, S., Arias-Vasquez, A., Chauhan, G., Athanasiu, L., Renteria, M. E., Kim, S., Höhn, D., Armstrong, N. J., Chen, Q., Holmes, A. J., Den Braber, A., Kloszewska, I., Andersson, M., Espeseth, T., Grimm, O., Abramovic, L., Alhusaini, S., Milaneschi, Y., Papmeyer, M., Axelsson, T., Ehrlich, S., Roiz-Santiañez, R., Kraemer, B., Håberg, A. K., Jones, H. J., Pike, G. B., Stein, D. J., Stevens, A., Bralten, J., Vernooij, M. W., Harris, T. B., Filippi, I., Witte, A. V., Guadalupe, T., Wittfeld, K., Mosley, T. H., Becker, J. T., Doan, N. T., Hagenaars, S. P., Saba, Y., Cuellar-Partida, G., Amin, N., Hilal, S., Nho, K., Karbalai, N., Arfanakis, K., Becker, D. M., Ames, D., Goldman, A. L., Lee, P. H., Boomsma, D. I., Lovestone, S., Giddaluru, S., Le Hellard, S., Mattheisen, M., Bohlken, M. M., Kasperaviciute, D., Schmaal, L., Lawrie, S. M., Agartz, I., Walton, E., Tordesillas-Gutierrez, D., Davies, G. E., Shin, J., Ipser, J. C., Vinke, L. N., Hoogman, M., Jia, T., Burkhardt, R., Klein, M., Crivello, F., Janowitz, D., Carmichael, O., Haukvik, U. K., Aribisala, B. S., Schmidt, H., Strike, L. T., Cheng, C.-Y., Risacher, S. L., Pütz, B., Fleischman, D. A., Assareh, A. A., Mattay, V. S., Buckner, R. L., Mecocci, P., Dale, A. M., Cichon, S., Boks, M. P., Matarin, M., Penninx, B. W. J. H., Calhoun, V. D., Chakravarty, M.
M., Marquand, A., Macare, C., Masouleh, S. K., Oosterlaan, J., Amouyel, P., Hegenscheid, K., Rotter, J. I., Schork, A. J., Liewald, D. C. M., De Zubicaray, G. I., Wong, T. Y., Shen, L., Sämann, P. G., Brodaty, H., Roffman, J. L., De Geus, E. J. C., Tsolaki, M., Erk, S., Van Eijk, K. R., Cavalleri, G. L., Van der Wee, N. J. A., McIntosh, A. M., Gollub, R. L., Bulayeva, K. B., Bernard, M., Richards, J. S., Himali, J. J., Loeffler, M., Rommelse, N., Hoffmann, W., Westlye, L. T., Valdés Hernández, M. C., Hansell, N. K., Van Erp, T. G. M., Wolf, C., Kwok, J. B. J., Vellas, B., Heinz, A., Olde Loohuis, L. M., Delanty, N., Ho, B.-C., Ching, C. R. K., Shumskaya, E., Singh, B., Hofman, A., Van der Meer, D., Homuth, G., Psaty, B. M., Bastin, M., Montgomery, G. W., Foroud, T. M., Reppermund, S., Hottenga, J.-J., Simmons, A., Meyer-Lindenberg, A., Cahn, W., Whelan, C. D., Van Donkelaar, M. M. J., Yang, Q., Hosten, N., Green, R. C., Thalamuthu, A., Mohnke, S., Hulshoff Pol, H. E., Lin, H., Jack Jr., C. R., Schofield, P. R., Mühleisen, T. W., Maillard, P., Potkin, S. G., Wen, W., Fletcher, E., Toga, A. W., Gruber, O., Huentelman, M., Smith, G. D., Launer, L. J., Nyberg, L., Jönsson, E. G., Crespo-Facorro, B., Koen, N., Greve, D., Uitterlinden, A. G., Weinberger, D. R., Steen, V. M., Fedko, I. O., Groenewold, N. A., Niessen, W. J., Toro, R., Tzourio, C., Longstreth Jr., W. T., Ikram, M. K., Smoller, J. W., Van Tol, M.-J., Sussmann, J. E., Paus, T., Lemaître, H., Schroeter, M. L., Mazoyer, B., Andreassen, O. A., Holsboer, F., Depondt, C., Veltman, D. J., Turner, J. A., Pausova, Z., Schumann, G., Van Rooij, D., Djurovic, S., Deary, I. J., McMahon, K. L., Müller-Myhsok, B., Brouwer, R. M., Soininen, H., Pandolfo, M., Wassink, T. H., Cheung, J. W., Wolfers, T., Martinot, J.-L., Zwiers, M. P., Nauck, M., Melle, I., Martin, N. G., Kanai, R., Westman, E., Kahn, R. S., Sisodiya, S. M., White, T., Saremi, A., Van Bokhoven, H., Brunner, H. G., Völzke, H., Wright, M. 
J., Van 't Ent, D., Nöthen, M. M., Ophoff, R. A., Buitelaar, J. K., Fernández, G., Sachdev, P. S., Rietschel, M., Van Haren, N. E. M., Fisher, S. E., Beiser, A. S., Francks, C., Saykin, A. J., Mather, K. A., Romanczuk-Seiferth, N., Hartman, C. A., DeStefano, A. L., Heslenfeld, D. J., Weiner, M. W., Walter, H., Hoekstra, P. J., Nyquist, P. A., Franke, B., Bennett, D. A., Grabe, H. J., Johnson, A. D., Chen, C., Van Duijn, C. M., Lopez, O. L., Fornage, M., Wardlaw, J. A., Schmidt, R., DeCarli, C., De Jager, P. L., Villringer, A., Debette, S., Gudnason, V., Medland, S. E., Shulman, J. M., Thompson, P. M., Seshadri, S., & Ikram, M. A. (2019). Genetic architecture of subcortical brain structures in 38,854 individuals worldwide. Nature Genetics, 51, 1624-1636. doi:10.1038/s41588-019-0511-y.

    Abstract

    Subcortical brain structures are integral to motion, consciousness, emotions and learning. We identified common genetic variation related to the volumes of the nucleus accumbens, amygdala, brainstem, caudate nucleus, globus pallidus, putamen and thalamus, using genome-wide association analyses in almost 40,000 individuals from CHARGE, ENIGMA and UK Biobank. We show that variability in subcortical volumes is heritable, and identify 48 significantly associated loci (40 novel at the time of analysis). Annotation of these loci by utilizing gene expression, methylation and neuropathological data identified 199 genes putatively implicated in neurodevelopment, synaptic signaling, axonal transport, apoptosis, inflammation/infection and susceptibility to neurological disorders. This set of genes is significantly enriched for Drosophila orthologs associated with neurodevelopmental phenotypes, suggesting evolutionarily conserved mechanisms. Our findings uncover novel biology and potential drug targets underlying brain development and disease.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions, and little work has been done to specifically investigate happiness, the only positive emotion among the basic emotions (Ekman & Friesen, 1971). However, a theoretical suggestion has been made that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Sauter, D., Wiland, J., Warren, J., Eisner, F., Calder, A., & Scott, S. K. (2005). Sounds of joy: An investigation of vocal expressions of positive emotions [Abstract]. Journal of Cognitive Neuroscience, 61(Supplement), B99.

    Abstract

    A series of experiments tested Ekman’s (1992) hypothesis that there is a set of positive basic emotions that are expressed using vocal para-linguistic sounds, e.g. laughter and cheers. The proposed categories investigated were amusement, contentment, pleasure, relief and triumph. Behavioural testing using a forced-choice task indicated that participants were able to reliably recognize vocal expressions of the proposed emotions. A cross-cultural study in the preliterate Himba culture in Namibia confirmed that these categories are also recognized across cultures. A recognition test of acoustically manipulated emotional vocalizations established that the recognition of different emotions utilizes different vocal cues, and that these in turn differ from the cues used when comprehending speech. In a study using fMRI we found that relative to a signal-correlated noise baseline, the paralinguistic expressions of emotion activated bilateral superior temporal gyri and sulci, lateral and anterior to primary auditory cortex, which is consistent with the processing of non-linguistic vocal cues in the auditory ‘what’ pathway. Notably, amusement was associated with greater activation extending into both temporal poles, amygdala and insular cortex. Overall, these results support the claim that ‘happiness’ can be fractionated into amusement, pleasure, relief and triumph.
  • Savoia, M., Cencioni, C., Mori, M., Atlante, S., Zaccagnini, G., Devanna, P., Di Marcotullio, L., Botta, B., Martelli, F., Zeiher, A. M., Pontecorvi, A., Farsetti, A., Spallotta, F., & Gaetano, C. (2019). P300/CBP-associated factor regulates transcription and function of isocitrate dehydrogenase 2 during muscle differentiation. The FASEB Journal, 33(3), 4107-4123. doi:10.1096/fj.201800788R.

    Abstract

    The epigenetic enzyme p300/CBP-associated factor (PCAF) belongs to the GCN5-related N-acetyltransferase (GNAT) family together with GCN5. Although its transcriptional and post-translational function is well characterized, little is known about its properties as regulator of cell metabolism. Here, we report the mitochondrial localization of PCAF conferred by an 85 aa mitochondrial targeting sequence (MTS) at the N-terminal region of the protein. In mitochondria, one of the PCAF targets is the isocitrate dehydrogenase 2 (IDH2) acetylated at lysine 180. This PCAF-regulated post-translational modification might reduce IDH2 affinity for isocitrate as a result of a conformational shift involving predictively the tyrosine at position 179. Site-directed mutagenesis and functional studies indicate that PCAF regulates IDH2, acting at dual level during myoblast differentiation: at a transcriptional level together with MyoD, and at a post-translational level by direct modification of lysine acetylation in mitochondria. The latter event determines a decrease in IDH2 function with negative consequences on muscle fiber formation in C2C12 cells. Indeed, a MTS-deprived PCAF does not localize into mitochondria, remains enriched into the nucleus, and contributes to a significant increase of muscle-specific gene expression enhancing muscle differentiation. The role of PCAF in mitochondria is a novel finding shedding light on metabolic processes relevant to early muscle precursor differentiation.—Savoia, M., Cencioni, C., Mori, M., Atlante, S., Zaccagnini, G., Devanna, P., Di Marcotullio, L., Botta, B., Martelli, F., Zeiher, A. M., Pontecorvi, A., Farsetti, A., Spallotta, F., Gaetano, C. P300/CBP-associated factor regulates transcription and function of isocitrate dehydrogenase 2 during muscle differentiation.

  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.

    Abstract

    The difficulty of the task of segmenting a speech signal into its words is immediately clear when listening to a foreign language; it is much harder to segment the signal into its words, since the words of the language are unknown. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We employed an automatic algorithm on the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes.
  • Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.

    Abstract

    Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
  • Scharenborg, O., & Seneff, S. (2005). A two-pass strategy for handling OOVs in a large vocabulary recognition task. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology, (pp. 1669-1672). ISCA Archive.

    Abstract

    This paper addresses the issue of large-vocabulary recognition in a specific word class. We propose a two-pass strategy in which only major cities are explicitly represented in the first stage lexicon. An unknown word model encoded as a phone loop is used to detect OOV city names (referred to as rare city names). SpeM, a tool that can extract words and word-initial cohorts from phone graphs on the basis of a large fallback lexicon, then provides an N-best list of promising city names on the basis of the phone sequences generated in the first stage. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances each containing one rare city name. We tested the size of the N-best list and three types of language models (LMs). The experiments showed that SpeM was able to include nearly 85% of the correct city names into an N-best list of 3000 city names when a unigram LM, which also boosted the unigram scores of a city name in a given state, was used.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2007). Early decision making in continuous speech. In M. Grimm, & K. Kroschel (Eds.), Robust speech recognition and understanding (pp. 333-350). I-Tech Education and Publishing.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O., Boves, L., & Ten Bosch, L. (2004). ‘On-line early recognition’ of polysyllabic words in continuous speech. In S. Cassidy, F. Cox, R. Mannell, & P. Sallyanne (Eds.), Proceedings of the Tenth Australian International Conference on Speech Science & Technology (pp. 387-392). Canberra: Australian Speech Science and Technology Association Inc.

    Abstract

    In this paper, we investigate the ability of SpeM, our recognition system based on the combination of an automatic phone recogniser and a wordsearch module, to determine as early as possible during the word recognition process whether a word is likely to be recognised correctly (this we refer to as ‘on-line’ early word recognition). We present two measures that can be used to predict whether a word is correctly recognised: the Bayesian word activation and the amount of available (acoustic) information for a word. SpeM was tested on 1,463 polysyllabic words in 885 continuous speech utterances. The investigated predictors indicated that a word activation that is 1) high (but not too high) and 2) based on more phones is more reliable to predict the correctness of a word than a similarly high value based on a small number of phones or a lower value of the word activation.
  • Scharenborg, O., Norris, D., Ten Bosch, L., & McQueen, J. M. (2005). How should a speech recognizer work? Cognitive Science, 29(6), 867-918. doi:10.1207/s15516709cog0000_37.

    Abstract

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to current existing models of HSR, recognizes words from real speech input.
  • Scharenborg, O. (2005). Narrowing the gap between automatic and human word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen, 16 September 2005.
  • Scharenborg, O. (2005). Parallels between HSR and ASR: How ASR can contribute to HSR. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology (pp. 1237-1240). ISCA Archive.

    Abstract

    In this paper, we illustrate the close parallels between the research fields of human speech recognition (HSR) and automatic speech recognition (ASR) using a computational model of human word recognition, SpeM, which was built using techniques from ASR. We show that ASR has proven to be useful for improving models of HSR by relieving them of some of their shortcomings. However, in order to build an integrated computational model of all aspects of HSR, a lot of issues remain to be resolved. In this process, ASR algorithms and techniques definitely can play an important role.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.

    Abstract

    Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models, i.e., by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, and this must have detrimental effects for both teacher and student. In a past study we have analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we have devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability.
  • Schijven, D., Geuze, E., Vinkers, C. H., Pulit, S. L., Schür, R. R., Malgaz, M., Bekema, E., Medic, J., van der Kust, K. E., Veldink, J. H., Boks, M. P., Vermetten, E., & Luykx, J. J. (2019). Multivariate genome-wide analysis of stress-related quantitative phenotypes. European Neuropsychopharmacology, 29(12), 1354-1364. doi:10.1016/j.euroneuro.2019.09.012.

    Abstract

    Exposure to traumatic stress increases the odds of developing a broad range of psychiatric conditions. Genetic studies targeting multiple stress-related quantitative phenotypes may shed light on mechanisms underlying vulnerability to psychopathology in the aftermath of stressful events. We applied a multivariate genome-wide association study (GWAS) to a unique military cohort (N = 583) in which we measured biochemical and behavioral phenotypes. The availability of pre- and post-deployment measurements allowed to capture changes in these phenotypes in response to stress. For genome-wide significant loci, we performed functional annotation, phenome-wide analysis and quasi-replication in PTSD case-control GWASs. We discovered one genetic variant reaching genome-wide significant association, surviving permutation and sensitivity analyses (rs10100651, p = 9.9 × 10−9). Functional annotation prioritized the genes INTS8 and TP53INP1. A phenome-wide scan revealed a significant association of these same genes with sleeping problems, hypertension and subjective well-being. Finally, a targeted lookup revealed nominally significant association of rs10100651 in a PTSD case-control GWAS in the UK Biobank (p = 0.02). We provide comprehensive evidence from multiple resources hinting at a role of the highlighted genetic variant in the human stress response, marking the power of multivariate genome-wide analysis of quantitative measures in stress research. Future genetic and functional studies can target this locus to further assess its effects on stress mediation and its possible role in psychopathology or resilience.

  • Schiller, N. O., Fikkert, P., & Levelt, C. C. (2004). Stress priming in picture naming: An SOA study. Brain and Language, 90(1-3), 231-240. doi:10.1016/S0093-934X(03)00436-X.

    Abstract

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel–MOtor vs. koSTUUM–MOtor; capital letters indicate stressed syllables in prime–target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. The reason for this effect was neither the quality of the pictures nor frequency of occurrence or voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.
  • Schiller, N. O., & De Ruiter, J. P. (2004). Some notes on priming, alignment, and self-monitoring [Commentary]. Behavioral and Brain Sciences, 27(2), 208-209. doi:10.1017/S0140525X0441005X.

    Abstract

    Any complete theory of speaking must take the dialogical function of language use into account. Pickering & Garrod (P&G) make some progress on this point. However, we question whether their interactive alignment model is the optimal approach. In this commentary, we specifically criticize (1) their notion of alignment being implemented through priming, and (2) their claim that self-monitoring can occur at all levels of linguistic representation.
  • Schiller, N. O. (2004). The onset effect in word naming. Journal of Memory and Language, 50(4), 477-490. doi:10.1016/j.jml.2004.02.004.

    Abstract

    This study investigates whether or not masked form priming effects in the naming task depend on the number of shared segments between prime and target. Dutch participants named bisyllabic words, which were preceded by visual masked primes. When primes shared the initial segment(s) with the target, naming latencies were shorter than in a control condition (string of percent signs). Onset complexity (singleton vs. complex word onset) did not modulate this priming effect in Dutch. Furthermore, significant priming due to shared final segments was only found when the prime did not contain a mismatching onset, suggesting an interfering role of initial non-target segments. It is concluded that (a) degree of overlap (segmental match vs. mismatch), and (b) position of overlap (initial vs. final) influence the magnitude of the form priming effect in the naming task. A modification of the segmental overlap hypothesis (Schiller, 1998) is proposed to account for the data.
  • Schiller, N. O. (2005). Verbal self-monitoring. In A. Cutler (Ed.), Twenty-first Century Psycholinguistics: Four cornerstones (pp. 245-261). Lawrence Erlbaum: Mahwah [etc.].
  • Schmiedtová, B. (2004). At the same time... The expression of simultaneity in learner varieties. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59569.
  • Schmiedtová, B. (2004). At the same time... The expression of simultaneity in learner varieties. Berlin: Mouton de Gruyter.

    Abstract

    The study undertakes a detailed and systematic classification of linguistic simultaneity expressions. Further, it aims at a well-described survey of how simultaneity is expressed by native speakers in their own language. On the basis of real production data, the book answers the questions of how native speakers express temporal simultaneity in general, and how learners at different levels of proficiency deal with this situation under experimental test conditions. Furthermore, the results of this study shed new light on our understanding of aspect in general, and on its acquisition by adult learners.
  • Schmitt, B. M., Schiller, N. O., Rodriguez-Fornells, A., & Münte, T. F. (2004). Elektrophysiologische Studien zum Zeitverlauf von Sprachprozessen. In H. H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 51-70). Tübingen: Stauffenburg.
  • Schoenmakers, G.-J., & De Swart, P. (2019). Adverbial hurdles in Dutch scrambling. In A. Gattnar, R. Hörnig, M. Störzer, & S. Featherston (Eds.), Proceedings of Linguistic Evidence 2018: Experimental Data Drives Linguistic Theory (pp. 124-145). Tübingen: University of Tübingen.

    Abstract

    This paper addresses the role of the adverb in Dutch direct object scrambling constructions. We report four experiments in which we investigate whether the structural position and the scope sensitivity of the adverb affect acceptability judgments of scrambling constructions and native speakers' tendency to scramble definite objects. We conclude that the type of adverb plays a key role in Dutch word ordering preferences.
  • Schoffelen, J.-M., Oostenveld, R., Lam, N. H. L., Udden, J., Hulten, A., & Hagoort, P. (2019). A 204-subject multimodal neuroimaging dataset to study language processing. Scientific Data, 6(1): 17. doi:10.1038/s41597-019-0020-y.

    Abstract

    This dataset, colloquially known as the Mother Of Unification Studies (MOUS) dataset, contains multimodal neuroimaging data that has been acquired from 204 healthy human subjects. The neuroimaging protocol consisted of magnetic resonance imaging (MRI) to derive information at high spatial resolution about brain anatomy and structural connections, and functional data during task, and at rest. In addition, magnetoencephalography (MEG) was used to obtain high temporal resolution electrophysiological measurements during task, and at rest. All subjects performed a language task, during which they processed linguistic utterances that either consisted of normal or scrambled sentences. Half of the subjects were reading the stimuli, the other half listened to the stimuli. The resting state measurements consisted of 5 minutes eyes-open for the MEG and 7 minutes eyes-closed for fMRI. The neuroimaging data, as well as the information about the experimental events are shared according to the Brain Imaging Data Structure (BIDS) format. This unprecedented neuroimaging language data collection allows for the investigation of various aspects of the neurobiological correlates of language.
  • Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2005). Neuronal coherence as a mechanism of effective corticospinal interaction. Science, 308, 111-113. doi:10.1126/science.1107027.

    Abstract

    Neuronal groups can interact with each other even if they are widely separated. One group might modulate its firing rate or its internal oscillatory synchronization to influence another group. We propose that coherence between two neuronal groups is a mechanism of efficient interaction, because it renders mutual input optimally timed and thereby maximally effective. Modulations of subjects' readiness to respond in a simple reaction-time task were closely correlated with the strength of gamma-band (40 to 70 hertz) coherence between motor cortex and spinal cord neurons. This coherence may contribute to an effective corticospinal interaction and shortened reaction times.
  • Schoot, L., Hagoort, P., & Segaert, K. (2019). Stronger syntactic alignment in the presence of an interlocutor. Frontiers in Psychology, 10: 685. doi:10.3389/fpsyg.2019.00685.

    Abstract

    Speakers are influenced by the linguistic context: hearing one syntactic alternative leads to an increased chance that the speaker will repeat this structure in the subsequent utterance (i.e., syntactic priming, or structural persistence). Top-down influences, such as whether a conversation partner (or, interlocutor) is present, may modulate the degree to which syntactic priming occurs. In the current study, we indeed show that the magnitude of syntactic alignment increases when speakers are interacting with an interlocutor as opposed to doing the experiment alone. The structural persistence effect for passive sentences is stronger in the presence of an interlocutor than when no interlocutor is present (i.e., when the participant is primed by a recording). We did not find evidence, however, that a speaker’s syntactic priming magnitude is influenced by the degree of their conversation partner’s priming magnitude. Together, these results support a mediated account of syntactic priming, in which syntactic choices are not only affected by preceding linguistic input, but also by top-down influences, such as the speakers’ communicative intent.
  • Schubotz, L., Ozyurek, A., & Holler, J. (2019). Age-related differences in multimodal recipient design: Younger, but not older adults, adapt speech and co-speech gestures to common ground. Language, Cognition and Neuroscience, 34(2), 254-271. doi:10.1080/23273798.2018.1527377.

    Abstract

    Speakers can adapt their speech and co-speech gestures based on knowledge shared with an addressee (common ground-based recipient design). Here, we investigate whether these adaptations are modulated by the speaker’s age and cognitive abilities. Younger and older participants narrated six short comic stories to a same-aged addressee. Half of each story was known to both participants, the other half only to the speaker. The two age groups did not differ in terms of the number of words and narrative events mentioned per narration, or in terms of gesture frequency, gesture rate, or percentage of events expressed multimodally. However, only the younger participants reduced the amount of verbal and gestural information when narrating mutually known as opposed to novel story content. Age-related differences in cognitive abilities did not predict these differences in common ground-based recipient design. The older participants’ communicative behaviour may therefore also reflect differences in social or pragmatic goals.

    Additional information

    plcp_a_1527377_sm4510.pdf
  • Schuerman, W. L., McQueen, J. M., & Meyer, A. S. (2019). Speaker statistical averageness modulates word recognition in adverse listening conditions. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1203-1207). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    We tested whether statistical averageness (SA) at the level of the individual speaker could predict a speaker’s intelligibility. 28 female and 21 male speakers of Dutch were recorded producing 336 sentences, each containing two target nouns. Recordings were compared to those of all other same-sex speakers using dynamic time warping (DTW). For each sentence, the DTW distance constituted a metric of phonetic distance from one speaker to all other speakers. SA comprised the average of these distances. Later, the same participants performed a word recognition task on the target nouns in the same sentences, under three degraded listening conditions. In all three conditions, accuracy increased with SA. This held even when participants listened to their own utterances. These findings suggest that listeners process speech with respect to the statistical properties of the language spoken in their community, rather than using their own speech as a reference.
  • Schuhmann, T., Kemmerer, S. K., Duecker, F., De Graaf, T. A., Ten Oever, S., De Weerd, P., & Sack, A. T. (2019). Left parietal tACS at alpha frequency induces a shift of visuospatial attention. PLoS One, 14(11): e0217729. doi:10.1371/journal.pone.0217729.

    Abstract

    Background

    Voluntary shifts of visuospatial attention are associated with a lateralization of parieto-occipital alpha power (7-13Hz), i.e. higher power in the hemisphere ipsilateral and lower power contralateral to the locus of attention. Recent noninvasive neuromodulation studies demonstrated that alpha power can be experimentally increased using transcranial alternating current stimulation (tACS).
    Objective/Hypothesis

    We hypothesized that tACS at alpha frequency over the left parietal cortex induces shifts of attention to the left hemifield. However, shifts of spatial attention occur not only voluntarily (endogenous/top-down) but also in a stimulus-driven manner (exogenous/bottom-up). To study the task-specificity of the potential effects of tACS on attentional processes, we administered three conceptually different spatial attention tasks.
    Methods

    36 healthy volunteers were recruited from an academic environment. In two separate sessions, we applied either high-density tACS at 10Hz, or sham tACS, for 35–40 minutes to their left parietal cortex. We systematically compared performance on endogenous attention, exogenous attention, and stimulus detection tasks.
    Results

    In the endogenous attention task, a greater leftward bias in reaction times was induced during left parietal 10Hz tACS as compared to sham. There were no stimulation effects in either the exogenous attention or the stimulus detection task.
    Conclusion

    The study demonstrates that high-density tACS at 10Hz can be used to modulate visuospatial attention performance. The tACS effect is task-specific, indicating that not all forms of attention are equally susceptible to the stimulation.

    Additional information

    relevant data
  • Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2007). An empirical characterization of response types in German association norms. In Proceedings of the GLDV workshop on lexical-semantic and ontological resources.
  • Schür, R. R., Schijven, D., Boks, M. P., Rutten, B. P., Stein, M. B., Veldink, J. H., Joëls, M., Geuze, E., Vermetten, E., Luykx, J. J., & Vinkers, C. H. (2019). The effect of genetic vulnerability and military deployment on the development of post-traumatic stress disorder and depressive symptoms. European Neuropsychopharmacology, 29(3), 405-415. doi:10.1016/j.euroneuro.2018.12.009.

    Abstract

    Exposure to trauma strongly increases the risk to develop stress-related psychopathology, such as post-traumatic stress disorder (PTSD) or major depressive disorder (MDD). In addition, liability to develop these moderately heritable disorders is partly determined by common genetic variance, which is starting to be uncovered by genome-wide association studies (GWASs). However, it is currently unknown to what extent genetic vulnerability and trauma interact. We investigated whether genetic risk based on summary statistics of large GWASs for PTSD and MDD predisposed individuals to report an increase in MDD and PTSD symptoms in a prospective military cohort (N = 516) at five time points after deployment to Afghanistan: one month, six months and one, two and five years. Linear regression was used to analyze the contribution of polygenic risk scores (PRSs, at multiple p-value thresholds) and their interaction with deployment-related trauma to the development of PTSD- and depression-related symptoms. We found no main effects of PRSs nor evidence for interactions with trauma on the development of PTSD or depressive symptoms at any of the time points in the five years after military deployment. Our results based on a unique long-term follow-up of a deployed military cohort suggest limited validity of current PTSD and MDD polygenic risk scores, albeit in the presence of minimal severe psychopathology in the target cohort. Even though the predictive value of PRSs will likely benefit from larger sample sizes in discovery and target datasets, progress will probably also depend on (endo)phenotype refinement that in turn will reduce etiological heterogeneity.
  • Schwichtenberg, B., & Schiller, N. O. (2004). Semantic gender assignment regularities in German. Brain and Language, 90(1-3), 326-337. doi:10.1016/S0093-934X(03)00445-0.

    Abstract

    Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution identified within the vocabulary of a language, are still controversial. The present study is an empirical contribution trying to shed light on the gender assignment system native German speakers have at their disposal. Participants presented with a category (e.g., predator) and a pair of gender-marked pseudowords (e.g., der Trelle vs. die Stisse) preferentially selected the pseudo-word preceded by the gender-marked determiner ‘‘associated’’ with the category (e.g., masculine). This finding suggests that semantic regularities might be part of the gender assignment system of native speakers.
  • Scott, D. R., & Cutler, A. (1984). Segmental phonology and the perception of syntactic structure. Journal of Verbal Learning and Verbal Behavior, 23, 450-466. Retrieved from http://www.sciencedirect.com/science//journal/00225371.

    Abstract

    Recent research in speech production has shown that syntactic structure is reflected in segmental phonology--the application of certain phonological rules of English (e.g., palatalization and alveolar flapping) is inhibited across phrase boundaries. We examined whether such segmental effects can be used in speech perception as cues to syntactic structure, and the relation between the use of these segmental features as syntactic markers in production and perception. Speakers of American English (a dialect in which the above segmental effects occur) could indeed use the segmental cues in syntax perception; speakers of British English (in which the effects do not occur) were unable to make use of them, while speakers of British English who were long-term residents of the United States showed intermediate performance.
  • Scott, S., & Sauter, D. (2004). Vocal expressions of emotion and positive and negative basic emotions [Abstract]. Proceedings of the British Psychological Society, 12, 156.

    Abstract

    Previous studies have indicated that vocal and facial expressions of the ‘basic’ emotions share aspects of processing. Thus amygdala damage compromises the perception of fear and anger from the face and from the voice. In the current study we tested the hypothesis that there exist positive basic emotions, expressed mainly in the voice (Ekman, 1992). Vocal stimuli were produced to express the specific positive emotions of amusement, achievement, pleasure, contentment and relief.
  • Scurry, A. N., Vercillo, T., Nicholson, A., Webster, M., & Jiang, F. (2019). Aging impairs temporal sensitivity, but not perceptual synchrony, across modalities. Multisensory Research, 32(8), 671-692. doi:10.1163/22134808-20191343.

    Abstract

    Encoding the temporal properties of external signals that comprise multimodal events is a major factor guiding everyday experience. However, during the natural aging process, impairments to sensory processing can profoundly affect multimodal temporal perception. Various mechanisms can contribute to temporal perception, and thus it is imperative to understand how each can be affected by age. In the current study, using three different temporal order judgement tasks (unisensory, multisensory, and sensorimotor), we investigated the effects of age on two separate temporal processes: synchronization and integration of multiple signals. These two processes rely on different aspects of temporal information, either the temporal alignment of processed signals or the integration/segregation of signals arising from different modalities, respectively. Results showed that the ability to integrate/segregate multiple signals decreased with age regardless of the task, and that the magnitude of such impairment correlated across tasks, suggesting a widespread mechanism affected by age. In contrast, perceptual synchrony remained stable with age, revealing a distinct intact mechanism. Overall, results from this study suggest that aging has differential effects on temporal processing, and general impairments with aging may impact global temporal sensitivity while context-dependent processes remain unaffected.

  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Seidlmayer, E., Galke, L., Melnychuk, T., Schultz, C., Tochtermann, K., & Förstner, K. U. (2019). Take it personally - A Python library for data enrichment for infometrical applications. In M. Alam, R. Usbeck, T. Pellegrini, H. Sack, & Y. Sure-Vetter (Eds.), Proceedings of the Posters and Demo Track of the 15th International Conference on Semantic Systems co-located with 15th International Conference on Semantic Systems (SEMANTiCS 2019).

    Abstract

    Like every other social sphere, science is influenced by individual characteristics of researchers. However, for investigations on scientific networks, only little data about the social background of researchers, e.g. social origin, gender, affiliation etc., is available. This paper introduces “Take it personally - TIP”, a conceptual model and library currently under development, which aims to support the semantic enrichment of publication databases with semantically related background information that resides elsewhere in the (semantic) web, such as Wikidata. The supplementary information enriches the original information in the publication databases and thus facilitates the creation of complex scientific knowledge graphs. Such enrichment helps to improve the scientometric analysis of scientific publications, as such analyses can also take the social backgrounds of researchers into account, and helps to understand social structure in research communities.
  • Seifart, F. (2005). The structure and use of shape-based noun classes in Miraña (North West Amazon). PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60378.

    Abstract

    Miraña, an endangered Witotoan language spoken in the Colombian Amazon region, has an inventory of over 60 noun class markers, most of which denote the shape of nominal referents. Class markers in this language are ubiquitous in their uses for derivational purposes in nouns and for agreement marking in virtually all other nominal expressions, such as pronouns, numerals, demonstratives, and relative clauses, as well as in verbs. This study provides a comprehensive analysis of this system by giving equal attention to its morphosyntactic, semantic, and discourse-pragmatic properties. The particular properties of this system raise issues in a number of ongoing theoretical discussions, in particular the typology of systems of nominal classification and the typology of reference tracking.

    Additional information

    full text via Radboud Repository
  • Seijdel, N., Sakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2019). Implicit scene segmentation in deeper convolutional neural networks. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 1059-1062). doi:10.32470/CCN.2019.1149-0.

    Abstract

    Feedforward deep convolutional neural networks (DCNNs) are matching and even surpassing human performance on object recognition. This performance suggests that activation of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Recent findings in humans, however, suggest that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds it appears on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicated less distinction between object and background features for more shallow networks. For those networks, we observed a benefit of training on segmented objects (as compared to unsegmented objects). Overall, deeper networks trained on natural (unsegmented) scenes seem to perform implicit 'segmentation' of the objects from their background, possibly by improved selection of relevant features.
  • Senft, G. (2007). Reference and 'référence dangereuse' to persons in Kilivila: An overview and a case study. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 309-337). Cambridge: Cambridge University Press.

    Abstract

    Based on the conversation analysts’ insights into the various forms of third person reference in English, this paper first presents the inventory of forms Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea, offers its speakers for making such references. To illustrate such references to third persons in talk-in-interaction in Kilivila, a case study on gossiping is presented in the second part of the paper. This case study shows that ambiguous anaphoric references to two first mentioned third persons turn out to not only exceed and even violate the frame of a clearly defined situational-intentional variety of Kilivila that is constituted by the genre “gossip”, but also that these references are extremely dangerous for speakers in the Trobriand Islanders’ society. I illustrate how this culturally dangerous situation escalates and how other participants of the group of gossiping men try to “repair” this violation of the frame of a culturally defined and metalinguistically labelled “way of speaking”. The paper ends with some general remarks on how the understanding of forms of person reference in a language is dependent on the culture specific context in which they are produced.
  • Senft, G. (2004). Sprache, Kognition und Konzepte des Raumes in verschiedenen Kulturen - Zum Problem der Interdependenz sprachlicher und mentaler Strukturen. In L. Jäger (Ed.), Medialität und Mentalität (pp. 163-176). Paderborn: Wilhelm Fink.
  • Senft, G. (2007). The Nijmegen space games: Studying the interrelationship between language, culture and cognition. In J. Wassmann, & K. Stockhaus (Eds.), Person, space and memory in the contemporary Pacific: Experiencing new worlds (pp. 224-244). New York: Berghahn Books.

    Abstract

    One of the central aims of the "Cognitive Anthropology Research Group" (since 1998 the "Department of Language and Cognition of the MPI for Psycholinguistics") is to research the relationship between language, culture and cognition and the conceptualization of space in various languages and cultures. Ever since its foundation in 1991 the group has been developing methods to elicit cross-culturally and cross-linguistically comparable data for this research project. After a brief summary of the central considerations that served as guidelines for the developing of these elicitation devices, this paper first presents a broad selection of the "space games" developed and used for data elicitation in the groups' various fieldsites so far. The paper then discusses the advantages and shortcomings of these data elicitation devices. Finally, it is argued that methodologists developing such devices find themselves in a position somewhere between Scylla and Charybdis - at least, if they take the requirement seriously that the elicited data should be comparable not only cross-culturally but also cross-linguistically.
  • Senft, G. (2004). What do we really know about serial verb constructions in Austronesian and Papuan languages? In I. Bril, & F. Ozanne-Rivierre (Eds.), Complex predicates in Oceanic languages (pp. 49-64). Berlin: Mouton de Gruyter.
  • Senft, G. (2004). Wosi tauwau topaisewa - songs about migrant workers from the Trobriand Islands. In A. Graumann (Ed.), Towards a dynamic theory of language. Festschrift for Wolfgang Wildgen on occasion of his 60th birthday (pp. 229-241). Bochum: Universitätsverlag Dr. N. Brockmeyer.
  • Senft, G. (1991). Bakavilisi Biga - we can 'turn' the language - or: What happens to English words in Kilivila language? In W. Bahner, J. Schildt, & D. Viehwegger (Eds.), Proceedings of the XIVth International Congress of Linguists (pp. 1743-1746). Berlin: Akademie Verlag.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (Ed.). (2004). Deixis and Demonstratives in Oceanic Languages. Canberra: Pacific Linguistics.

    Abstract

    When we communicate, we communicate in a certain context, and this context shapes our utterances. Natural languages are context-bound and deixis 'concerns the ways in which languages encode or grammaticalise features of the context of utterance or speech event, and thus also concerns ways in which the interpretation of utterances depends on the analysis of that context of utterance' (Stephen Levinson). The systems of deixis and demonstratives in the Oceanic languages represented in the contributions to this volume illustrate the fascinating complexity of spatial reference in these languages. Some of the studies presented here highlight social aspects of deictic reference illustrating de Leon's point that 'reference is a collaborative task' . It is hoped that this anthology will contribute to a better understanding of this area and provoke further studies in this extremely interesting, though still rather underdeveloped, research area.
  • Senft, G. (2005). Bronislaw Malinowski and linguistic pragmatics. In P. Cap (Ed.), Pragmatics today (pp. 139-155). Frankfurt am Main: Lang.
  • Senft, G. (2004). Aspects of spatial deixis in Kilivila. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 59-80). Canberra: Pacific Linguistics.
  • Senft, G. (2007). "Ich weiß nicht, was soll es bedeuten.." - Ethnolinguistische Winke zur Rolle von umfassenden Metadaten bei der (und für die) Arbeit mit Corpora. In W. Kallmeyer, & G. Zifonun (Eds.), Sprachkorpora - Datenmengen und Erkenntnisfortschritt (pp. 152-168). Berlin: Walter de Gruyter.

    Abstract

    When working as a native speaker of German with corpora of spoken or written German, one rarely reflects on the wealth of culture-specific information codified in such texts, especially when the data are contemporary. In most cases, one has no difficulty with the background knowledge that these data presuppose and treat as common knowledge. If, on the other hand, one looks at corpus data documenting other, above all non-Indo-European, languages, one quickly becomes aware of how much culture-specific knowledge is needed to understand these data adequately. In this paper I illustrate this observation with an example from my corpus of Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea. Using a short excerpt from a roughly 26-minute documentation of what and how six Trobrianders gossip about with one another, I show what a hearer or reader of such a short excerpt of data must know not only to be able to follow the conversation at all, but also to understand what is going on in it and why a conversation that at first glance seems entirely everyday suddenly acquires enormous delicacy and significance for a Trobriander. Against the background of this example, I conclude by pointing out how absolutely necessary it is, in all corpora, to make such culture-specific information explicit through so-called metadata when data materials are made accessible and annotated.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2005). [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920 by Michael Young]. Oceania, 75(3), 302-302.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (2004). [Review of the book Serial verbs in Oceanic: A descriptive typology by Terry Crowley]. Linguistics, 42(4), 855-859. doi:10.1515/ling.2004.028.
  • Senft, G. (2005). [Review of the book The art of Kula by Shirley F. Campbell]. Anthropos, 100, 247-249.
  • Senft, G. (2004). [Review of the book The Oceanic Languages by John Lynch, Malcolm Ross and Terry Crowley]. Linguistics, 42(2), 515-520. doi:10.1515/ling.2004.016.
  • Senft, G. (2004). Introduction. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 1-13). Canberra: Pacific Linguistics.
  • Senft, G. (2007). Language, culture and cognition: Frames of spatial reference and why we need ontologies of space [Abstract]. In A. G. Cohn, C. Freksa, & B. Bebel (Eds.), Spatial cognition: Specialization and integration (p. 12).

    Abstract

    One of the many results of the "Space" research project conducted at the MPI for Psycholinguistics is that there are three "Frames of spatial Reference" (FoRs), the relative, the intrinsic and the absolute FoR. Cross-linguistic research showed that speakers who prefer one FoR in verbal spatial references rely on a comparable coding system for memorizing spatial configurations and for making inferences with respect to these spatial configurations in non-verbal problem solving. Moreover, research results also revealed that in some languages these verbal FoRs also influence gestural behavior. These results document the close interrelationship between language, culture and cognition in the domain "Space". The proper description of these interrelationships in the spatial domain requires language and culture specific ontologies.
  • Senft, G. (2007). Nominal classification. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of cognitive linguistics (pp. 676-696). Oxford: Oxford University Press.

    Abstract

    This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics.
  • Senft, G. (1991). Mahnreden auf den Trobriand Inseln: Eine Fallstudie. In D. Flader (Ed.), Verbale Interaktion: Studien zur Empirie und Methologie der Pragmatik (pp. 27-49). Stuttgart: Metzler.
  • Senft, G. (1991). Network models to describe the Kilivila classifier system. Oceanic Linguistics, 30, 131-155. Retrieved from http://www.jstor.org/stable/3623085.
  • Senft, G. (2004). Participation and posture. In A. Majid (Ed.), Field Manual Volume 9 (pp. 80-82). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506964.

    Abstract

    Human ethologists have shown that humans are both attracted to others and at the same time fear them. They refer to this kind of fear with the technical term ‘social fear’ and claim that “it is alleviated with personal acquaintance but remains a principle characteristic of interpersonal behaviour. As a result, we maintain various degrees of greater distance between ourselves and others depending on the amount of confidence we have in the other” (Eibl-Eibesfeldt 1989: 335). The goal of this task is to conduct exploratory, heuristic research to establish a new subproject that – based on a corpus of video data – will investigate various forms of human spatial behaviour cross-culturally.
  • Senft, G. (1991). Prolegomena to the pragmatics of "situational-intentional" varieties in Kilivila language. In J. Verschueren (Ed.), Levels of linguistic adaptation: Selected papers from the International Pragmatics Conference, Antwerp, August 1987 (pp. 235-248). Amsterdam: John Benjamins.
  • Senft, G. (2019). Rituelle Kommunikation. In F. Liedtke, & A. Tuchen (Eds.), Handbuch Pragmatik (pp. 423-430). Stuttgart: J. B. Metzler. doi:10.1007/978-3-476-04624-6_41.

    Abstract

    Linguistics adopted the term and concept of 'ritual communication' from comparative behavioural research. Human ethologists distinguish a number of so-called 'expressive movements', which are manifested in facial expression, gesture, interpersonal distance (proxemics), and body posture (kinesics). Many of these expressive movements have developed into specific signals. Ethologists define ritualization as the modification of behaviours in the service of signal formation. Behaviours that have been ritualized into signals are rituals. In principle, any behaviour can become a signal, either in the course of evolution or through conventions that hold in a particular community that has culturally developed such signals, which are then passed on and learned by its members.
  • Senft, G., Majid, A., & Levinson, S. C. (2007). The language of taste. In A. Majid (Ed.), Field Manual Volume 10 (pp. 42-45). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492913.
  • Senghas, A., Kita, S., & Ozyurek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779-1782. doi:10.1126/science.1100199.

    Abstract

    A new sign language has been created by deaf Nicaraguans over the past 25 years, providing an opportunity to observe the inception of universal hallmarks of language. We found that in their initial creation of the language, children analyzed complex events into basic elements and sequenced these elements into hierarchically structured expressions according to principles not observed in gestures accompanying speech in the surrounding language. Successive cohorts of learners extended this procedure, transforming Nicaraguan signing from its early gestural form into a linguistic system. We propose that this early segmentation and recombination reflect mechanisms with which children learn, and thereby perpetuate, language. Thus, children naturally possess learning abilities capable of giving language its fundamental structure.
  • Senghas, A., Ozyurek, A., & Kita, S. (2005). [Response to comment on Children creating core properties of language: Evidence from an emerging sign language in Nicaragua]. Science, 309(5731), 56c-56c. doi:10.1126/science.1110901.
  • Seuren, P. A. M. (2005). The origin of grammatical terminology. In B. Smelik, R. Hofman, C. Hamans, & D. Cram (Eds.), A companion in linguistics: A Festschrift for Anders Ahlqvist on the occasion of his sixtieth birthday (pp. 185-196). Nijmegen: Stichting Uitgeverij de Keltische Draak.
  • Seuren, P. A. M. (2005). The role of lexical data in semantics. In A. Cruse, F. Hundsnurscher, M. Job, & P. R. Lutzeier (Eds.), Lexikologie / Lexicology. Ein internationales Handbuch zur Natur und Struktur von Wörtern und Wortschätzen/An international handbook on the nature and structure of words and vocabularies. 2. Halbband / Volume 2 (pp. 1690-1696). Berlin: Walter de Gruyter.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (2004). The importance of being modular. Journal of Linguistics, 40(3), 593-635. doi:10.1017/S0022226704002786.
  • Seuren, P. A. M. (1975). Autonomous syntax and prelexical rules. In S. De Vriendt, J. Dierickx, & M. Wilmet (Eds.), Grammaire générative et psychomécanique du langage: actes du colloque organisé par le Centre d'études linguistiques et littéraires de la Vrije Universiteit Brussel, Bruxelles, 29-31 mai 1974 (pp. 89-98). Paris: Didier.
