Publications

  • Lenkiewicz, P., Pereira, M., Freire, M. M., & Fernandes, J. (2009). A new 3D image segmentation method for parallel architectures. In Proceedings of the 2009 IEEE International Conference on Multimedia and Expo [ICME 2009] June 28 – July 3, 2009, New York (pp. 1813-1816).

    Abstract

    This paper presents a novel model for 3D image segmentation and reconstruction. It has been designed to be implemented on a computer cluster or a multi-core platform. The required features include near-complete independence between the processes participating in the segmentation task and a division of work that is as equal as possible across all participants. As a result, it avoids many drawbacks often encountered when parallelizing an algorithm that was constructed to operate sequentially. Furthermore, the proposed algorithm based on the new segmentation model is efficient and shows very good, nearly linear performance growth with the number of processing units.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2009). The dynamic topology changes model for unsupervised image segmentation. In Proceedings of the 11th IEEE International Workshop on Multimedia Signal Processing (MMSP'09) (pp. 1-5).

    Abstract

    Deformable models are a popular family of image segmentation techniques that has gained significant attention over the last two decades, serving both in real-world applications and as the basis for research work. One much-desired feature that deformable models offer is the ability to change their topology during the segmentation process. Using this characteristic it is possible to segment objects with discontinuities in their bodies or to detect an undefined number of objects in the scene. In this paper we present our model for handling topology changes in image segmentation methods based on the Active Volumes solution. The model is capable of changing the structure of objects while the segmentation progresses, which makes it efficient and suitable for implementation on powerful execution environments, such as multi-core architectures or computer clusters.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2009). The whole mesh Deformation Model for 2D and 3D image segmentation. In Proceedings of the 2009 IEEE International Conference on Image Processing (ICIP 2009) (pp. 4045-4048).

    Abstract

    In this paper we present a novel approach for image segmentation using Active Nets and Active Volumes. These solutions are based on Deformable Models, with a slight difference in the method for describing the shapes of interest: instead of using a contour or a surface, they represent the segmented objects with a mesh structure, which makes it possible not only to describe the surface of the objects but also to model their interiors. This is achieved by dividing the nodes of the mesh into two categories, internal and external, which are responsible for two different tasks. In our new approach we propose to remove this separation and use only one type of node. With this assumption we manage to significantly shorten the segmentation time while maintaining its quality.
  • Levelt, W. J. M. (2002). Phonological encoding in speech production: Comments on Jurafsky et al., Schiller et al., and van Heuven & Haan. In C. Gussenhoven, & N. Warner (Eds.), Laboratory phonology VII (pp. 87-99). Berlin: Mouton de Gruyter.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (2002). A theory of lexical access in speech production. In G. T. Altmann (Ed.), Psycholinguistics: critical concepts in psychology (pp. 278-377). London: Routledge.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress Acoustics (pp. 55-55).
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2009). Cognitive anthropology. In G. Senft, J. O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 50-57). Amsterdam: Benjamins.
  • Levinson, S. C. (2002). Appendix to the 2002 Supplement, version 1, for the “Manual” for the field season 2001. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 62-64). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2009). Foreword. In J. Liep (Ed.), A Papuan plutocracy: Ranked exchange on Rossel Island (pp. ix-xxiii). Copenhagen: Aarhus University Press.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2002). Landscape terms and place names in Yélî Dnye, the language of Rossel Island, PNG. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2009). Language and mind: Let's get the issues straight! In S. D. Blum (Ed.), Making sense of language: Readings in culture and communication (pp. 95-104). Oxford: Oxford University Press.
  • Levinson, S. C. (2017). Living with Manny's dangerous idea. In G. Raymond, G. H. Lerner, & J. Heritage (Eds.), Enabling human conduct: Studies of talk-in-interaction in honor of Emanuel A. Schegloff (pp. 327-349). Amsterdam: Benjamins.
  • Levinson, S. C. (2017). Speech acts. In Y. Huang (Ed.), Oxford handbook of pragmatics (pp. 199-216). Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199697960.013.22.

    Abstract

    The essential insight of speech act theory was that when we use language, we perform actions—in a more modern parlance, core language use in interaction is a form of joint action. Over the last thirty years, speech acts have been relatively neglected in linguistic pragmatics, although important work has been done especially in conversation analysis. Here we review the core issues—the identifying characteristics, the degree of universality, the problem of multiple functions, and the puzzle of speech act recognition. Special attention is drawn to the role of conversation structure, probabilistic linguistic cues, and plan or sequence inference in speech act recognition, and to the centrality of deep recursive structures in sequences of speech acts in conversation.

  • Levinson, S. C., & Majid, A. (2009). Preface and priorities. In A. Majid (Ed.), Field manual volume 12 (pp. III). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., & Majid, A. (2009). The role of language in mind. In S. Nolen-Hoeksema, B. Fredrickson, G. Loftus, & W. Wagenaar (Eds.), Atkinson and Hilgard's introduction to psychology (15th ed., pp. 352). London: Cengage Learning.
  • Little, H., Perlman, M., & Eryilmaz, K. (2017). Repeated interactions can lead to more iconic signals. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 760-765). Austin, TX: Cognitive Science Society.

    Abstract

    Previous research has shown that repeated interactions can cause iconicity in signals to reduce. However, data from several recent studies have shown the opposite trend: an increase in iconicity as the result of repeated interactions. Here, we discuss whether signals may become less or more iconic as a result of the modality used to produce them. We review several recent experimental results before presenting new data from multi-modal signals, where visual input creates audio feedback. Our results show that the growth in iconicity present in the audio information may come at a cost to iconicity in the visual information. Our results have implications for how we think about and measure iconicity in artificial signalling experiments. Further, we discuss how iconicity in real-world speech may stem from auditory, kinetic or visual information, but how iconicity in these different modalities may conflict.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2009). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 12 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883570.

    Abstract

    Synaesthesia is a condition in which stimulation of one sensory modality (e.g. hearing) causes additional experiences in a second, unstimulated modality (e.g. seeing colours). The goal of this task is to explore the types (and incidence) of synaesthesia in different cultures. Two simple tests can ascertain the existence of synaesthesia in your community.

    Additional information

    2009_Synaesthesia_audio_files.zip
  • Majid, A., & Enfield, N. J. (2017). Body. In H. Burkhardt, J. Seibt, G. Imaguire, & S. Gerogiorgakis (Eds.), Handbook of mereology (pp. 100-103). Munich: Philosophia.
  • Majid, A., Manko, P., & De Valk, J. (2017). Language of the senses. In S. Dekker (Ed.), Scientific breakthroughs in the classroom! (pp. 40-76). Nijmegen: Science Education Hub Radboud University.

    Abstract

    The project that we describe in this chapter has the theme ‘Language of the senses’. This theme is based on the research of Asifa Majid and her team regarding the influence of language and culture on sensory perception. The chapter consists of two sections. Section 2.1 describes how different sensory perceptions are spoken of in different languages. Teachers can use this section as substantive preparation before they launch this theme in the classroom. Section 2.2 describes how teachers can handle this theme in accordance with the seven phases of inquiry-based learning. Chapter 1, in which the general guideline of the seven phases is described, forms the basis for this. We therefore recommend the use of chapter 1 as the starting point for the execution of a project in the classroom. This chapter provides the thematic additions.

    Additional information

    Materials Language of the senses
  • Majid, A., Manko, P., & de Valk, J. (2017). Taal der Zintuigen. In S. Dekker, & J. Van Baren-Nawrocka (Eds.), Wetenschappelijke doorbraken de klas in! Molecuulbotsingen, Stress en Taal der Zintuigen (pp. 128-166). Nijmegen: Wetenschapsknooppunt Radboud Universiteit.

    Abstract

    Language of the senses is about the influence of language and culture on sensory perception. How do you describe what you see, feel, taste or smell? In some cultures there are many different words for colour, whereas in other cultures there are very few. Are we born with these different colour groups? And does how you talk about something also determine what you perceive?
  • Martin, A., & Van Turennout, M. (2002). Searching for the neural correlates of object priming. In L. R. Squire, & D. L. Schacter (Eds.), The Neuropsychology of Memory (pp. 239-247). New York: Guilford Press.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2017). Whether long-term tracking of speech rate affects perception depends on who is talking. In Proceedings of Interspeech 2017 (pp. 586-590). doi:10.21437/Interspeech.2017-1517.

    Abstract

    Speech rate is known to modulate perception of temporally ambiguous speech sounds. For instance, a vowel may be perceived as short when the immediate speech context is slow, but as long when the context is fast. Yet, effects of long-term tracking of speech rate are largely unexplored. Two experiments tested whether long-term tracking of rate influences perception of the temporal Dutch vowel contrast /ɑ/-/a:/. In Experiment 1, one low-rate group listened to 'neutral' rate speech from talker A and to slow speech from talker B. Another high-rate group was exposed to the same neutral speech from A, but to fast speech from B. Between-group comparison of the 'neutral' trials revealed that the low-rate group reported a higher proportion of /a:/ in A's 'neutral' speech, indicating that A sounded faster when B was slow. Experiment 2 tested whether one's own speech rate also contributes to effects of long-term tracking of rate. Here, talker B's speech was replaced by playback of participants' own fast or slow speech. No evidence was found that one's own voice affected perception of talker A in larger speech contexts. These results carry implications for our understanding of the mechanisms involved in rate-dependent speech perception and of dialogue.
  • Matsuo, A., & Duffield, N. (2002). Assessing the generality of knowledge about English ellipsis in SLA. In J. Costa, & M. J. Freitas (Eds.), Proceedings of the GALA 2001 Conference on Language Acquisition (pp. 49-53). Lisboa: Associacao Portuguesa de Linguistica.
  • Matsuo, A., & Duffield, N. (2002). Finiteness and parallelism: Assessing the generality of knowledge about English ellipsis in SLA. In B. Skarabela, S. Fish, & A.-H.-J. Do (Eds.), Proceedings of the 26th Boston University Conference on Language Development (pp. 197-207). Somerville, Massachusetts: Cascadilla Press.
  • Mauner, G., Koenig, J.-P., Melinger, A., & Bienvenue, B. (2002). The lexical source of unexpressed participants and their role in sentence and discourse understanding. In P. Merlo, & S. Stevenson (Eds.), The Lexical Basis of Sentence Processing: Formal, Computational and Experimental Issues (pp. 233-254). Amsterdam: John Benjamins.
  • McDonough, J., Lehnert-LeHouillier, H., & Bardhan, N. P. (2009). The perception of nasalized vowels in American English: An investigation of on-line use of vowel nasalization in lexical access. In Nasal 2009.

    Abstract

    The goal of the presented study was to investigate the use of coarticulatory vowel nasalization in lexical access by native speakers of American English. In particular, we compare the use of coarticulatory place of articulation cues to that of coarticulatory vowel nasalization. Previous research on lexical access has shown that listeners use cues to the place of articulation of a postvocalic stop in the preceding vowel. However, vowel nasalization as a cue to an upcoming nasal consonant has been argued to be a more complex phenomenon. In order to establish whether coarticulatory vowel nasalization aids the process of lexical access in the same way as place of articulation cues do, we conducted two perception experiments: an off-line 2AFC discrimination task and an on-line eyetracking study using the visual world paradigm. The results of our study suggest that listeners are indeed able to use vowel nasalization in similar ways to place of articulation information, and that both types of cues aid lexical access.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Monaghan, P., Brand, J., Frost, R. L. A., & Taylor, G. (2017). Multiple variable cues in the environment promote accurate and robust word learning. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 817-822). Retrieved from https://mindmodeling.org/cogsci2017/papers/0164/index.html.

    Abstract

    Learning how words refer to aspects of the environment is a complex task, but one that is supported by numerous cues within the environment which constrain the possibilities for matching words to their intended referents. In this paper we tested the predictions of a computational model of multiple cue integration for word learning, which predicted that variation in the presence of cues provides an optimal learning situation. In a cross-situational learning task with adult participants, we varied the reliability of the presence of distributional, prosodic, and gestural cues. We found that the best learning occurred when cues were often present, but not always. The variability increased the salience of individual cues for the learner, but resulted in robust learning that was not vulnerable to individual cues’ presence or absence. Thus, variability of multiple cues in the language-learning environment provided the optimal circumstances for word learning.
  • Musgrave, S., & Cutfield, S. (2009). Language documentation and an Australian National Corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus: Mustering Languages (pp. 10-18). Somerville: Cascadilla Proceedings Project.

    Abstract

    Corpus linguistics and language documentation are usually considered separate subdisciplines within linguistics, having developed from different traditions and often operating on different scales, but the authors will suggest that there are commonalities to the two: both aim to represent language use in a community, and both are concerned with managing digital data. The authors propose that the development of the Australian National Corpus (AusNC) be guided by the experience of language documentation in the management of multimodal digital data and its annotation, and in ethical issues pertaining to making the data accessible. This would allow an AusNC that is distributed, multimodal, and multilingual, with holdings of text, audio, and video data distributed across multiple institutions; and including Indigenous, sign, and migrant community languages. An audit of language material held by Australian institutions and individuals is necessary to gauge the diversity and volume of possible content, and to inform common technical standards.
  • Narasimhan, B., & Brown, P. (2009). Getting the inside story: Learning to talk about containment in Tzeltal and Hindi. In V. C. Mueller-Gathercole (Ed.), Routes to language: Studies in honor of Melissa Bowerman (pp. 97-132). New York: Psychology Press.

    Abstract

    The present study examines young children's uses of semantically specific and general relational containment terms (e.g. in, enter) in Hindi and Tzeltal, and the extent to which their usage patterns are influenced by input frequency. We hypothesize that if children have a preference for relational terms that are semantically specific, this will be reflected in early acquisition of more semantically specific expressions and underextension of semantically general ones, regardless of the distributional patterns of use of these terms in the input. Our findings however show a strong role for input frequency in guiding children's patterns of use of containment terms in the two languages. Yet language-specific lexicalization patterns play a role as well, since object-specific containment verbs are used as early as the semantically general 'enter' verb by children acquiring Tzeltal.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • O'Meara, C., & Majid, A. (2017). El léxico olfativo en la lengua seri. In A. L. M. D. Ruiz, & A. Z. Pérez (Eds.), La Dimensión Sensorial de la Cultura: Diez contribuciones al estudio de los sentidos en México. (pp. 101-118). Mexico City: Universidad Autónoma Metropolitana.
  • Oostdijk, N., Goedertier, W., Van Eynde, F., Boves, L., Martens, J.-P., Moortgat, M., & Baayen, R. H. (2002). Experiences from the Spoken Dutch Corpus Project. In Third international conference on language resources and evaluation (pp. 340-347). Paris: European Language Resources Association.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2017). Speakers’ gestures predict the meaning and perception of iconicity in signs. In G. Gunzelmann, A. Howes, & T. Tenbrink (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 889-894). Austin, TX: Cognitive Science Society.

    Abstract

    Sign languages stand out in that there is a high prevalence of conventionalised linguistic forms that map directly to their referent (i.e., iconic). Hearing adults show low performance when asked to guess the meaning of iconic signs, suggesting that their iconic features are largely inaccessible to them. However, it has not been investigated whether speakers’ gestures, which also share the property of iconicity, may assist non-signers in guessing the meaning of signs. Results from a pantomime generation task (Study 1) show that speakers’ gestures exhibit a high degree of systematicity, and share different degrees of form overlap with signs (full, partial, and no overlap). Study 2 shows that signs with full and partial overlap are more accurately guessed and are assigned higher iconicity ratings than signs with no overlap. Deaf and hearing adults converge in their iconic depictions for some concepts due to shared conceptual knowledge and the manual-visual modality.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (2017). Function and processing of gesture in the context of language. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 39-58). Amsterdam: John Benjamins Publishing. doi:10.1075/gs.7.03ozy.

    Abstract

    Most research focuses on the function of gesture independently of its link to the speech it accompanies and the co-expressive functions it has together with speech. This chapter instead approaches gesture in terms of its communicative function in relation to speech, and demonstrates how it is shaped by the linguistic encoding of a speaker’s message. Drawing on crosslinguistic research on iconic/pointing gesture production with adults, children, and bilinguals, it shows that the specific language speakers use modulates the rate and the shape of the iconic gestures produced for the same events. The findings challenge claims that aim to understand gesture’s function for “thinking only” in adults and during development.
  • Ozyurek, A. (2002). Speech-gesture relationship across languages and in second language learners: Implications for spatial thinking and speaking. In B. Skarabela, S. Fish, & A. H. Do (Eds.), Proceedings of the 26th annual Boston University Conference on Language Development (pp. 500-509). Somerville, MA: Cascadilla Press.
  • Pacheco, A., Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Profiling dyslexic children: Phonology and visual naming skills. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 40). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Perlman, M., Fusaroli, R., Fein, D., & Naigles, L. (2017). The use of iconic words in early child-parent interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 913-918). Austin, TX: Cognitive Science Society.

    Abstract

    This paper examines the use of iconic words in early conversations between children and caregivers. The longitudinal data include a span of six observations of 35 child-parent dyads in the same semi-structured activity. Our findings show that children’s speech initially has a high proportion of iconic words, and over time, these words become diluted by an increase of arbitrary words. Parents’ speech is also initially high in iconic words, with a decrease in the proportion of iconic words over time – in this case driven by the use of fewer iconic words. The level and development of iconicity are related to individual differences in the children’s cognitive skills. Our findings fit with the hypothesis that iconicity facilitates early word learning and may play an important role in learning to produce new words.
  • Petersson, K. M. (2002). Brain physiology. In R. Behn, & C. Veranda (Eds.), Proceedings of The 4th Southern European School of the European Physical Society - Physics in Medicine (pp. 37-38). Montreux: ESF.
  • Petersson, K. M., Ingvar, M., & Reis, A. (2009). Language and literacy from a cognitive neuroscience perspective. In D. Olson, & N. Torrance (Eds.), Cambridge handbook of literacy (pp. 152-181). Cambridge: Cambridge University Press.
  • Popov, V., Ostarek, M., & Tenison, C. (2017). Inferential Pitfalls in Decoding Neural Representations. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 961-966). Austin, TX: Cognitive Science Society.

    Abstract

    A key challenge for cognitive neuroscience is to decipher the representational schemes of the brain. A recent class of decoding algorithms for fMRI data, stimulus-feature-based encoding models, is becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid, because decoding can occur even if the neural representational space and the stimulus-feature space use different representational schemes. This can happen when there is a systematic mapping between them. In a simulation, we successfully decoded the binary representation of numbers from their decimal features. Since binary and decimal number systems use different representations, we cannot conclude that the binary representation encodes decimal features. The same argument applies to the decoding of neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations.
  • Pouw, W., Aslanidou, A., Kamermans, K. L., & Paas, F. (2017). Is ambiguity detection in haptic imagery possible? Evidence for Enactive imaginings. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2925-2930). Austin, TX: Cognitive Science Society.

    Abstract

    A classic discussion about visual imagery is whether it affords reinterpretation, like discovering two interpretations in the duck/rabbit illustration. Recent findings converge on reinterpretation being possible in visual imagery, suggesting functional equivalence with pictorial representations. However, it is unclear whether such reinterpretations are necessarily a visual-pictorial achievement. To assess this, 68 participants were briefly presented with 2-D ambiguous figures. One figure was presented visually, the other via manual touch alone. Afterwards participants mentally rotated the memorized figures so as to discover a novel interpretation. A portion (20.6%) of the participants detected a novel interpretation in visual imagery, replicating previous research. Strikingly, 23.6% of participants were able to reinterpret figures they had only felt. That reinterpretation truly involved haptic processes was further supported by the fact that some participants performed co-thought gestures on an imagined figure during retrieval. These results are promising for further development of an Enactivist approach to imagination.
  • Ramus, F., & Fisher, S. E. (2009). Genetics of language. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 855-871). Cambridge, MA: MIT Press.

    Abstract

    It has long been hypothesised that the human faculty to acquire a language is in some way encoded in our genetic program. However, only recently has genetic evidence been available to begin to substantiate the presumed genetic basis of language. Here we review the first data from molecular genetic studies showing association between gene variants and language disorders (specific language impairment, speech sound disorder, developmental dyslexia), we discuss the biological function of these genes, and we further speculate on the more general question of how the human genome builds a brain that can learn a language.
  • Rapold, C. J., & Zaugg-Coretti, S. (2009). Exploring the periphery of the central Ethiopian Linguistic area: Data from Yemsa and Benchnon. In J. Crass, & R. Meyer (Eds.), Language contact and language change in Ethiopia (pp. 59-81). Köln: Köppe.
  • Reesink, G. (2002). The Eastern bird's head languages. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 1-44). Canberra: Pacific Linguistics.
  • Reesink, G. (2009). A connection between Bird's Head and (Proto) Oceanic. In B. Evans (Ed.), Discovering history through language, papers in honor of Malcolm Ross (pp. 181-192). Canberra: Pacific Linguistics.
  • Reesink, G. (2002). A grammar sketch of Sougb. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 181-275). Canberra: Pacific Linguistics.
  • Reesink, G. (2002). Mansim, a lost language of the Bird's Head. In G. Reesink (Ed.), Languages of the Eastern Bird's Head (pp. 277-340). Canberra: Pacific Linguistics.
  • Ringersma, J., Zinn, C., & Kemps-Snijders, M. (2009). LEXUS & ViCoS: From lexical to conceptual spaces. In 1st International Conference on Language Documentation and Conservation (ICLDC).

    Abstract

    LEXUS is a web-based lexicon tool and the knowledge space software ViCoS is an extension of LEXUS, allowing users to create relations between objects in and across lexica. LEXUS and ViCoS are part of the Language Archiving Technology software, developed at the MPI for Psycholinguistics to archive and enrich linguistic resources collected in the framework of language documentation projects. LEXUS is of primary interest for language documentation, offering the possibility not just to create a digital dictionary, but additionally to create multi-media encyclopedic lexica. ViCoS provides an interface between the lexical space and the ontological space. Its approach permits users to model a world of concepts and their interrelations based on categorization patterns made by the speech community. We describe the LEXUS and ViCoS functionalities using three cases from DoBeS language documentation projects. (1) Marquesan: The Marquesan lexicon was initially created in Toolbox and imported into LEXUS using the Toolbox import functionality. The lexicon is enriched with multi-media to illustrate the meaning of the words in its cultural environment. Members of the speech community consider words as keys to access and describe relevant parts of their life and traditions. Their understanding of words is best described by the various associations they evoke rather than in terms of any formal theory of meaning. Using ViCoS a knowledge space of related concepts is being created. (2) Kola-Sámi: Two lexica are being created in LEXUS. The RuSaDic lexicon is a Russian-Kildin wordlist in which the entries have relatively limited structure and content, while SaRuDiC is a more complex structured lexicon with much richer content, including multi-media fragments and derivations. Using ViCoS we have created a connection between the two lexica, so that speakers who are familiar with Russian and wish to revitalize their Kildin can enter the lexicon through RuSaDic and from there approach the informative SaRuDiC. Similarly we will create relations from the two lexica to external open databases, such as Álgu. (3) Beaver: A speaker database including kinship relations has been created and imported into LEXUS. In the LEXUS views the relations for individual speakers are displayed. Using ViCoS the relational information from the database will be extracted to form a kinship relation space with specific relation types, such as 'mother-of'. The whole set of relations from the database can be displayed in one ViCoS relation window, and zoom functionality is available.
  • Roelofs, A. (2002). Storage and computation in spoken word production. In S. Nooteboom, F. Weerman, & F. Wijnen (Eds.), Storage and computation in the language faculty (pp. 183-216). Dordrecht: Kluwer.
  • Roelofs, A. (2002). Modeling of lexical access in speech production: A psycholinguistic perspective on the lexicon. In L. Behrens, & D. Zaefferer (Eds.), The lexicon in focus: Competition and convergence in current lexicology (pp. 75-92). Frankfurt am Main: Lang.
  • Rojas-Berscia, L. M., & Shi, J. A. (2017). Hakka as spoken in Suriname. In K. Yakpo, & P. C. Muysken (Eds.), Boundaries and bridges: Language contact in multilingual ecologies (pp. 179-196). Berlin: De Gruyter.
  • Rossano, F., Brown, P., & Levinson, S. C. (2009). Gaze, questioning and culture. In J. Sidnell (Ed.), Conversation analysis: Comparative perspectives (pp. 187-249). Cambridge University Press.

    Abstract

    Relatively little work has examined the function of gaze in interaction. Previous research has mainly addressed issues such as next-speaker selection (e.g. Lerner 2003) or engagement and disengagement in the conversation (Goodwin 1981). It has looked for gaze behavior in relation to the roles participants are enacting locally (e.g., speaker or hearer) and in relation to the unit “turn” in the turn-taking system (Goodwin 1980, 1981; Kendon 1967). In his seminal work Kendon (1967) claimed that “there is a very clear and quite consistent pattern, namely, that [the speaker] tends to look away as he begins a long utterance, and in many cases somewhat in advance of it; and that he looks up at his interlocutor as the end of the long utterance approaches, usually during the last phase, and he continues to look thereafter.” Goodwin (1980), introducing the listener into the picture, proposed the following two rules: Rule 1: a speaker should obtain the gaze of his recipient during the course of a turn at talk. Rule 2: a recipient should be gazing at the speaker when the speaker is gazing at the hearer. Rossano’s work (2005) has suggested the possibility of a different level of order for gaze in interaction: the sequential level. In particular he found that gaze withdrawal after sustained mutual gaze tends to occur at sequence possible completion, and if both participants withdraw, the sequence is complete. By sequence here we refer to a unit that is structured around the notion of the adjacency pair. The latter refers to two turns uttered by different speakers, orderly organized (first part and second part) and pair-type related (greeting-greeting, question-answer). These two turns are related by conditional relevance (Schegloff 1968), that is to say that the first part requires the production of the second and the absence of the latter is noticeable and accountable. Question-answer sequences are very typical examples of adjacency pairs. In this paper we compare the use of gaze in question-answer sequences in three different populations: Italians, speakers of Mayan Tzeltal (Mexico) and speakers of Yélî Dnye (Rossel Island, Papua New Guinea). Relying mainly on dyadic interactions and ordinary conversation, we provide a comparison of the occurrence of gaze in each turn (to compare with the claims of Goodwin and Kendon) and we describe whether gaze has any effect on the other participant’s response and whether it persists during the answer. The three languages and cultures compared here belong to three different continents and have been previously described as potentially following opposite rules: for speakers of Italian and Yélî Dnye, unproblematic and preferred engagement of mutual gaze, while for speakers of Tzeltal, strong mutual gaze avoidance. This paper tries to provide an accurate description of their gaze behavior in this specific type of conversational sequence.
  • Rossi, G., & Zinken, J. (2017). Social agency and grammar. In N. J. Enfield, & P. Kockelman (Eds.), Distributed agency: The sharing of intention, cause, and accountability (pp. 79-86). New York: Oxford University Press.

    Abstract

    One of the most conspicuous ways in which people distribute agency among each other is by asking another for help. Natural languages give people a range of forms to do this, the distinctions among which have consequences for how agency is distributed. Forms such as imperatives (e.g. ‘pass the salt’) and recurrent types of interrogatives (e.g. ‘can you pass the salt?’) designate another person as the doer of the action. In contrast to this, impersonal deontic statements (e.g. ‘it is necessary to get the salt’) express the need for an action without tying it to any particular individual. This can generate interactions in which the identity of the doer must be sorted out among participants, allowing us to observe the distribution of agency in vivo. The case of impersonal deontic statements demonstrates the importance of grammar as a resource for managing human action and sociality.
  • Rossi, G. (2017). Secondary and deviant uses of the imperative for requesting in Italian. In M.-L. Sorjonen, L. Raevaara, & E. Couper-Kuhlen (Eds.), Imperative turns at talk: The design of directives in action (pp. 103-137). Amsterdam: John Benjamins.

    Abstract

    The use of the imperative for requesting has been mostly explained on the basis of estimations of social distance, relative power, and entitlement. More recent research, however, has identified other selection factors to do with the functional and sequential relation of the action requested to the trajectory of the ongoing interaction. In everyday activities among family and friends, the imperative is typically warranted by an earlier commitment of the requestee to a joint project or shared goal which the action requested contributes to. The chapter argues this to be the primary use of the imperative for requesting in Italian informal interaction, and distinguishes it from other uses of the imperative that do not conform to the predominant pattern. These other uses are of two kinds: (i) secondary, that is, less frequent and formally marked imperatives that still orient to social-interactional conditions supporting an expectation of compliance, and (ii) deviant, where the imperative is selected in deliberate violation of the social-interactional conditions that normally support it, attracting special attention and accomplishing more than just requesting. This study extends prior findings on the functional distribution of imperative requests and makes a point of relating and classifying distinct uses of the same form of action, offering new insights into more general aspects of language use such as markedness and normativity.
  • Saito, H., & Kita, S. (2002). "Jesuchaa, kooi, imi" no hennshuu ni atat te [On the occasion of editing "Jesuchaa, Kooi, imi"]. In H. Saito, & S. Kita (Eds.), Kooi, jesuchaa, imi [Action, gesture, meaning] (pp. v-xi). Tokyo: Kyooritsu Shuppan.
  • Salomo, D., & Liszkowski, U. (2009). Socialisation of prelinguistic communication. In A. Majid (Ed.), Field manual volume 12 (pp. 56-57). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.844597.

    Abstract

    Little is known about cultural differences in interactional practices with infants. The goal of this task is to document the nature and emergence of caregiver-infant interaction/communication in different cultures. There are two tasks: Task 1 – a brief documentation about the culture under investigation with respect to infant-caregiver interaction and parental beliefs. Task 2 – the “decorated room”, a task designed to elicit infant-caregiver interaction.
  • Sankoff, G., & Brown, P. (2009). The origins of syntax in discourse: A case study of Tok Pisin relatives [reprint of 1976 article in Language]. In J. Holm, & S. Michaelis (Eds.), Contact languages (vol. II) (pp. 433-476). London: Routledge.
  • Sauter, D. (2009). Emotion concepts. In A. Majid (Ed.), Field manual volume 12 (pp. 20-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883578.

    Abstract

    The goal of this task is to investigate emotional categories across linguistic and cultural boundaries. There are three core tasks. In order to conduct this task you will need emotional vocalisation stimuli on your computer and you must translate the scenarios at the end of this entry into your local language.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2009). Universal vocal signals of emotion. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (CogSci 2009) (pp. 2251-2255). Cognitive Science Society.

    Abstract

    Emotional signals allow for the sharing of important information with conspecifics, for example to warn them of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. Although much is known about facial expressions of emotion, less research has focused on affect in the voice. We compare British listeners to individuals from remote Namibian villages who have had no exposure to Western culture, and examine recognition of non-verbal emotional vocalizations, such as screams and laughs. We show that a number of emotions can be universally recognized from non-verbal vocal signals. In addition we demonstrate the specificity of this pattern, with a set of additional emotions only recognized within, but not across these cultural groups. Our findings indicate that a small set of primarily negative emotions have evolved signals across several modalities, while most positive emotions are communicated with culture-specific signals.
  • Scharenborg, O., Boves, L., & de Veth, J. (2002). ASR in a human word recognition model: Generating phonemic input for Shortlist. In J. H. L. Hansen, & B. Pellom (Eds.), ICSLP 2002 - INTERSPEECH 2002 - 7th International Conference on Spoken Language Processing (pp. 633-636). ISCA Archive.

    Abstract

    The current version of the psycholinguistic model of human word recognition Shortlist suffers from two unrealistic constraints. First, the input of Shortlist must consist of a single string of phoneme symbols. Second, the current version of the search in Shortlist makes it difficult to deal with insertions and deletions in the input phoneme string. This research attempts to fully automatically derive a phoneme string from the acoustic signal that is as close as possible to the number of phonemes in the lexical representation of the word. We optimised an Automatic Phone Recogniser (APR) using two approaches, viz. varying the value of the mismatch parameter and optimising the APR output strings on the output of Shortlist. The approaches show that it will be very difficult to satisfy the input requirements of the present version of Shortlist with a phoneme string generated by an APR.
  • Scharenborg, O., & Okolowski, S. (2009). Lexical embedding in spoken Dutch. In INTERSPEECH 2009 - 10th Annual Conference of the International Speech Communication Association (pp. 1879-1882). ISCA Archive.

    Abstract

    A stretch of speech is often consistent with multiple words, e.g., the sequence /hæm/ is consistent with ‘ham’ but also with the first syllable of ‘hamster’, resulting in temporary ambiguity. However, to what degree does this lexical embedding occur? Analyses on two corpora of spoken Dutch showed that 11.9%-19.5% of polysyllabic word tokens have word-initial embedding, while 4.1%-7.5% of monosyllabic word tokens can appear word-initially embedded. This is much lower than suggested by an analysis of a large dictionary of Dutch. Speech processing thus appears to be simpler than one might expect on the basis of statistics on a dictionary.
  • Scharenborg, O., & Boves, L. (2002). Pronunciation variation modelling in a model of human word recognition. In Pronunciation Modeling and Lexicon Adaptation for Spoken Language Technology [PMLA-2002] (pp. 65-70).

    Abstract

    Due to pronunciation variation, many insertions and deletions of phones occur in spontaneous speech. The psycholinguistic model of human speech recognition Shortlist is not well able to deal with phone insertions and deletions and is therefore not well suited for dealing with real-life input. The research presented in this paper explains how Shortlist can benefit from pronunciation variation modelling in dealing with real-life input. Pronunciation variation was modelled by including variants into the lexicon of Shortlist. A series of experiments was carried out to find the optimal acoustic model set for transcribing the training material that was used as basis for the generation of the variants. The Shortlist experiments clearly showed that Shortlist benefits from pronunciation variation modelling. However, the performance of Shortlist stays far behind the performance of other, more conventional speech recognisers.
  • Scharenborg, O. (2009). Using durational cues in a computational model of spoken-word recognition. In INTERSPEECH 2009 - 10th Annual Conference of the International Speech Communication Association (pp. 1675-1678). ISCA Archive.

    Abstract

    Evidence that listeners use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past few years. In this paper, we investigate whether durational cues are also beneficial for word recognition in a computational model of spoken-word recognition. Two sets of simulations were carried out using the acoustic signal as input. The simulations showed that the computational model, like humans, benefits from durational cues during word recognition, and uses these to disambiguate the speech signal. These results thus provide support for the theory that durational cues play a role in spoken-word recognition.
  • Schiller, N. O., Costa, A., & Colomé, A. (2002). Phonological encoding of single words: In search of the lost syllable. In C. Gussenhoven, & N. Warner (Eds.), Laboratory Phonology VII (pp. 35-59). Berlin: Mouton de Gruyter.
  • Schiller, N. O., & Verdonschot, R. G. (2017). Is bilingual speech production language-specific or non-specific? The case of gender congruency in Dutch – English bilinguals. In H. Reckman, L.-L.-S. Cheng, M. Hijzelendoorn, & R. Sybesma (Eds.), Crossroads semantics: Computation, experiment and grammar (pp. 139-154). Amsterdam: Benjamins.

    Abstract

    The present paper looks at semantic interference and gender congruency effects during bilingual picture-word naming. According to Costa, Miozzo & Caramazza (1999), only the activation from lexical nodes within a language is considered during lexical selection. If this is accurate, these findings should hold with respect to semantic and gender/determiner effects even though the distractors are in another language. In the present study three effects were found: (1) a main effect of language, (2) semantic effects for both target-language and non-target-language distractors, and (3) gender congruency effects for targets with target-language distractors only. These findings are at odds with the language-specific proposal of Costa et al. (1999). Implications of these findings are discussed.
  • Schiller, N. O., Schmitt, B., Peters, J., & Levelt, W. J. M. (2002). 'BAnana' or 'baNAna'? Metrical encoding during speech production [Abstract]. In M. Baumann, A. Keinath, & J. Krems (Eds.), Experimentelle Psychologie: Abstracts der 44. Tagung experimentell arbeitender Psychologen. (pp. 195). TU Chemnitz, Philosophische Fakultät.

    Abstract

    The time course of metrical encoding, i.e. stress, during speech production is investigated. In a first experiment, participants were presented with pictures whose bisyllabic Dutch names had initial or final stress (KAno 'canoe' vs. kaNON 'cannon'; capital letters indicate stressed syllables). Picture names were matched for frequency and object recognition latencies. When participants were asked to judge whether picture names had stress on the first or second syllable, they showed significantly faster decision times for initially stressed targets than for targets with final stress. Experiment 2 replicated this effect with trisyllabic picture names (faster RTs for penultimate stress than for ultimate stress). In our view, these results reflect the incremental phonological encoding process. Wheeldon and Levelt (1995) found that segmental encoding is a process running from the beginning to the end of words. Here, we present evidence that the metrical pattern of words, i.e. stress, is also encoded incrementally.
  • Schiller, N. O. (2002). From phonetics to cognitive psychology: Psycholinguistics has it all. In A. Braun, & H. Masthoff (Eds.), Phonetics and its Applications. Festschrift for Jens-Peter Köster on the Occasion of his 60th Birthday. [Beihefte zur Zeitschrift für Dialektologie und Linguistik; 121] (pp. 13-24). Stuttgart: Franz Steiner Verlag.
  • Schimke, S. (2009). Does finiteness mark assertion? A picture selection study with Turkish learners and native speakers of German. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 169-202). Berlin: Mouton de Gruyter.
  • Schmiedtová, V., & Schmiedtová, B. (2002). The color spectrum in language: The case of Czech: Cognitive concepts, new idioms and lexical meanings. In H. Gottlieb, J. Mogensen, & A. Zettersten (Eds.), Proceedings of The 10th International Symposium on Lexicography (pp. 285-292). Tübingen: Max Niemeyer Verlag.

    Abstract

    The representative corpus SYN2000 in the Czech National Corpus (CNK) project contains 100 million word forms taken from different types of texts. I have tried to determine the extent and depth of the linguistic material in the corpus. First, I chose the adjectives indicating the basic colors of the spectrum and other parts of speech (nouns and adverbs) derived from these adjectives. An analysis of three examples - black, white and red - shows the extent of the linguistic wealth and diversity we are looking at: because of size limitations, no existing dictionary is capable of embracing all the analyzed nuances. Currently, we can only hope that the next dictionary of contemporary Czech, built on the basis of the Czech National Corpus, will be electronic. Without the size limitations, we would be able to include many of the fine nuances of the language.
  • Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (2002). Exploring the time course of lexical access in language production: Picture word interference studies. In G. Altmann (Ed.), Psycholinguistics: Critical Concepts in Psychology [vol. 5] (pp. 168-191). London: Routledge.
  • Schuller, B., Steidl, S., Batliner, A., Bergelson, E., Krajewski, J., Janott, C., Amatuni, A., Casillas, M., Seidl, A., Soderstrom, M., Warlaumont, A. S., Hidalgo, G., Schnieder, S., Heiser, C., Hohenhorst, W., Herzog, M., Schmitt, M., Qian, K., Zhang, Y., Trigeorgis, G., Tzirakis, P., & Zafeiriou, S. (2017). The INTERSPEECH 2017 computational paralinguistics challenge: Addressee, cold & snoring. In Proceedings of Interspeech 2017 (pp. 3442-3446). doi:10.21437/Interspeech.2017-43.

    Abstract

    The INTERSPEECH 2017 Computational Paralinguistics Challenge addresses three different problems for the first time in a research competition under well-defined conditions: In the Addressee sub-challenge, it has to be determined whether speech produced by an adult is directed towards another adult or towards a child; in the Cold sub-challenge, speech under cold has to be told apart from ‘healthy’ speech; and in the Snoring sub-challenge, four different types of snoring have to be classified. In this paper, we describe these sub-challenges, their conditions, and the baseline feature extraction and classifiers, which include data-learnt feature representations by end-to-end learning with convolutional and recurrent neural networks, and bag-of-audio-words for the first time in the challenge series.
  • Schuppler, B., Van Dommelen, W., Koreman, J., & Ernestus, M. (2009). Word-final [t]-deletion: An analysis on the segmental and sub-segmental level. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2275-2278). Causal Productions Pty Ltd.

    Abstract

    This paper presents a study on the reduction of word-final [t]s in conversational standard Dutch. Based on a large amount of tokens annotated on the segmental level, we show that the bigram frequency and the segmental context are the main predictors for the absence of [t]s. In a second study, we present an analysis of the detailed acoustic properties of word-final [t]s and we show that bigram frequency and context also play a role on the subsegmental level. This paper extends research on the realization of /t/ in spontaneous speech and shows the importance of incorporating sub-segmental properties in models of speech.
  • Scott, S. K., Sauter, D., & McGettigan, C. (2009). Brain mechanisms for processing perceived emotional vocalizations in humans. In S. M. Brudzynski (Ed.), Handbook of mammalian vocalization: An integrative neuroscience approach (pp. 187-198). London: Academic Press.

    Abstract

    Humans express emotional information in their facial expressions and body movements, as well as in their voice. In this chapter we consider the neural processing of a specific kind of vocal expression: non-verbal emotional vocalizations, e.g. laughs and sobs. We outline evidence, from patient studies and functional imaging studies, for both emotion-specific and more general processing of emotional information in the voice. We relate these findings to evidence for both basic and dimensional accounts of the representations of emotion. We describe in detail an fMRI study of positive and negative non-verbal expressions of emotion, which revealed that prefrontal areas involved in the control of oro-facial movements were also sensitive to different kinds of vocal emotional information.
  • Seifart, F. (2002). Shape-distinctions picture-object matching task, with 2002 supplement. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 15-17). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Sekine, K. (2017). Gestural hesitation reveals children’s competence on multimodal communication: Emergence of disguised adaptor. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 3113-3118). Austin, TX: Cognitive Science Society.

    Abstract

    Speakers sometimes modify their gestures during the process of production into adaptors such as hair touching or eye scratching. Such disguised adaptors are evidence that speakers can monitor their gestures. In this study, we investigated when and how disguised adaptors are first produced by children. Sixty elementary school children participated in this study (ten children in each age group; from 7 to 12 years old). They were instructed to watch a cartoon and retell it to their parents. The results showed that children did not produce disguised adaptors until the age of 8. The disguised adaptors accompany fluent speech until the children are 10 years old and accompany dysfluent speech until they reach 11 or 12 years of age. These results suggest that children start to monitor their gestures when they are 9 or 10 years old. Cognitive changes were considered as factors influencing the emergence of disguised adaptors.
  • Senft, G. (2002). What should the ideal online-archive documenting linguistic data of various (endangered) languages and cultures offer to interested parties? Some ideas of a technically naive linguistic field researcher and potential user. In P. Austin, H. Dry, & P. Wittenburg (Eds.), Proceedings of the international LREC workshop on resources and tools in field linguistics (pp. 11-15). Paris: European Language Resources Association.
  • Senft, G. (2009). Bronislaw Kasper Malinowski. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 210-225). Amsterdam: John Benjamins.
  • Senft, G. (2009). Elicitation. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 105-109). Amsterdam: John Benjamins.
  • Senft, G. (2017). "Control your emotions! If teasing provokes you, you've lost your face.." The Trobriand Islanders' control of their public display of emotions. In A. Storch (Ed.), Consensus and Dissent: Negotiating Emotion in the Public Space (pp. 59-80). Amsterdam: John Benjamins.

    Abstract

    Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea, has a rich inventory of terms - nouns, verbs, adjectives and idiomatic phrases and expressions - to refer precisely to, and to differentiate, emotions and inner feelings. This paper describes how the Trobriand Islanders of Papua New Guinea deal with the public display of emotions. Forms of emotion control in public encounters are discussed and explained on the basis of the ritual communication that pervades the Trobrianders' verbal and non-verbal behaviour. Especially highlighted is the Trobrianders' metalinguistic concept of "biga sopa" and its important role for emotion control in encounters that may run the risk of escalating from argument and conflict to aggression and violence.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (2002). Feldforschung in einer deutschen Fabrik - oder: Trobriand ist überall. In H. Fischer (Ed.), Feldforschungen. Erfahrungsberichte zur Einführung (Neufassung) (pp. 207-226). Berlin: Reimer.
  • Senft, G. (2002). Linguistische Feldforschung. In H. M. Müller (Ed.), Arbeitsbuch Linguistik (pp. 353-363). Paderborn: Schöningh UTB.
  • Senft, G. (2009). Fieldwork. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 131-139). Amsterdam: John Benjamins.
  • Senft, G. (2009). Linguistische Feldforschung. In H. M. Müller (Ed.), Arbeitsbuch Linguistik (2nd rev. ed., pp. 353-363). Paderborn: Schöningh UTB.

    Abstract

    This article provides a brief introduction into field research, its aims, its methods and the various phases of fieldwork.
  • Senft, G. (2009). Introduction. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 1-17). Amsterdam: John Benjamins.
  • Senft, G. (2017). Expressions for emotions - and inner feelings - in Kilivila, the language of the Trobriand Islanders: A descriptive and methodological critical essay. In N. Tersis, & P. Boyeldieu (Eds.), Le langage de l'emotion: Variations linguistiques et culturelles (pp. 349-376). Paris: Peeters.

    Abstract

    This paper reports on the results of my research on the lexical means Kilivila offers its speakers to refer to emotions and inner feelings. Data were elicited with 18 “Ekman’s faces”, in which photos of the faces of one woman and two men illustrate the allegedly universal basic emotions (anger, disgust, fear, happiness, sadness, surprise), and with film stimuli staging standard emotions. The data are discussed on the basis of the following research questions: How “effable” are emotions, or do we observe ineffability – the difficulty of putting experiences into words – within this domain? Do consultants agree with one another in how they name emotions? Are facial expressions or situations better cues for labeling?
  • Senft, G. (2009). Phatic communion. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 226-233). Amsterdam: John Benjamins.
  • Senft, G. (2017). The Coral Gardens are Losing Their Magic: The Social and Cultural Impact of Climate Change and Overpopulation for the Trobriand Islanders. In A. T. von Poser, & A. von Poser (Eds.), Facets of Fieldwork - Essay in Honor of Jürg Wassmann (pp. 57-68). Heidelberg: Universitätsverlag Winter.

    Abstract

    This paper deals with the dramatic environmental, social and cultural changes on the Trobriand Islands which I experienced during 16 long- and short-term fieldtrips from 1982 to 2012. I first report on the climate change I experienced there over the years and provide a survey of the demographic changes on the Trobriand Islands – highlighting the situation in Tauwema, my village of residence on Kaile’una Island. Finally, I report on the social and cultural impact these dramatic changes have on the Trobriand Islanders and their culture.
  • Senft, G. (2009). Sind die emotionalen Gesichtsausdrücke des Menschen in allen Kulturen gleich? In Max Planck Society (Ed.), Max-Planck-Gesellschaft Jahrbuch 2008/09 Tätigkeitsberichte und Publikationen (DVD) (pp. 1-4). München: Max Planck Society for the Advancement of Science.

    Abstract

    This paper presents a project which tests the hypothesis of the universality of facial expressions of emotions cross-culturally and cross-linguistically. First results are presented which contradict the hypothesis.
  • Senft, G. (1998). Zeichenkonzeptionen in Ozeanien [Sign conceptions in Oceania]. In R. Posner, T. Robering, & T. Sebeok (Eds.), Semiotics: A handbook on the sign-theoretic foundations of nature and culture (Vol. 2) (pp. 1971-1976). Berlin: de Gruyter.
  • Senft, G. (2009). Trobriand Islanders' forms of ritual communication. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 81-101). Oxford: Berg.
  • Seuren, P. A. M. (2002). Pseudoarguments and pseudocomplements. In B. Nevin (Ed.), The legacy of Zellig Harris: Language and information into the 21st Century: 1 Philosophy of Science, Syntax, and Semantics (pp. 179-206). Amsterdam: John Benjamins.
  • Seuren, P. A. M. (2002). Clitic clusters in French and Italian. In H. Jacobs, & L. Wetzels (Eds.), Liber Amicorum Bernard Bichakjian (pp. 217-233). Maastricht: Shaker.