Publications

  • Mickan, A., Schiefke, M., & Stefanowitsch, A. (2014). Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al. 2003. In M. Hilpert, & S. Flach (Eds.), Yearbook of the German Cognitive Linguistics Association (pp. 39-50). Berlin: Walter de Gruyter. doi:10.1515/gcla-2014-0004.

    Abstract

    In this paper, we present two attempts to replicate a widely-cited but never fully published experiment in which German and Spanish speakers were asked to associate adjectives with nouns of masculine and feminine grammatical gender (Boroditsky et al. 2003). The researchers claim that speakers associated more stereotypically female adjectives with grammatically feminine nouns and more stereotypically male adjectives with grammatically masculine nouns. We were not able to replicate the results either in a word association task or in an analogous primed lexical decision task. This suggests that the results of the original experiment were either an artifact of some non-documented aspect of the experimental procedure or a statistical fluke. The question whether speakers assign sex-based interpretations to grammatical gender categories at all cannot be answered definitively, as the results in the published literature vary considerably. However, our experiments show that if such an effect exists, it is not strong enough to be measured indirectly via the priming of adjectives by nouns.
  • Micklos, A. (2014). The nature of language in interaction. In E. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference.
  • Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to move the eyes: To what extent are language-mediated eye movements automatic? In V. S. C. Pammi, & N. Srinivasan (Eds.), Progress in Brain Research: Decision making: Neural and behavioural approaches (pp. 135-149). New York: Elsevier.

    Abstract

    Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed.
  • Mizera, P., Pollak, P., Kolman, A., & Ernestus, M. (2014). Impact of irregular pronunciation on phonetic segmentation of the Nijmegen Corpus of Casual Czech. In P. Sojka, A. Horák, I. Kopecek, & K. Pala (Eds.), Text, Speech and Dialogue: 17th International Conference, TSD 2014, Brno, Czech Republic, September 8-12, 2014. Proceedings (pp. 499-506). Heidelberg: Springer.

    Abstract

    This paper describes a pilot study of phonetic segmentation applied to the Nijmegen Corpus of Casual Czech (NCCCz). This corpus contains informal speech of a strongly spontaneous nature, which influences the character of the produced speech at various levels. This work is part of wider research into pronunciation reduction in such informal speech. We present an analysis of the accuracy of phonetic segmentation when canonical or reduced pronunciation is used. The achieved segmentation accuracy provides information about the general accuracy of the acoustic modelling that is to be applied in spontaneous speech recognition. As a by-product of the presented spontaneous speech segmentation, this paper also describes the created lexicon with canonical pronunciations of words in NCCCz, a tool supporting pronunciation checks of lexicon items, and finally a mini-database of selected utterances from NCCCz manually labelled at the phonetic level, suitable for evaluation purposes.
  • Moscoso del Prado Martín, F. (2003). Paradigmatic structures in morphological processing: Computational and cross-linguistic experimental studies. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58929.
  • Moscoso del Prado Martín, F., & Baayen, R. H. (2003). Using the structure found in time: Building real-scale orthographic and phonetic representations by accumulation of expectations. In H. Bowman, & C. Labiouse (Eds.), Connectionist Models of Cognition, Perception and Emotion: Proceedings of the Eighth Neural Computation and Psychology Workshop (pp. 263-272). Singapore: World Scientific.
  • Moulin, C. A., Souchay, C., Bradley, R., Buchanan, S., Karadöller, D. Z., & Akan, M. (2014). Déjà vu in older adults. In B. L. Schwartz, & A. S. Brown (Eds.), Tip-of-the-tongue states and related phenomena (pp. 281-304). Cambridge: Cambridge University Press.
  • Mulder, K. (2013). Family and neighbourhood relations in the mental lexicon: A cross-language perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Every day we read and hear thousands of words, seemingly without any effort. Yet a complex mental process takes place in our brain, in which many words other than the presented word also become active. This happens especially when those other words resemble the actually presented word in spelling, pronunciation, or meaning. This similarity-based activation even extends to other languages: similar words become active there as well. Where are the limits of this activation process? When processing the English word 'steam', do you also activate the Dutch word 'stram' (a so-called 'neighbour')? And does 'clock' activate both 'clockwork' and the Dutch 'klokhuis' (two morphological family members from different languages)? Kimberley Mulder investigated how such relations influence the reading process of Dutch-English bilinguals. In several experimental studies she found that bilinguals activate morphological family members and orthographic neighbours not only from the language they are currently reading in, but also from the other language they know. Reading a word is thus by no means limited to what you actually see: it activates an entire network of words in your brain.

  • Muysken, P., Hammarström, H., Birchall, J., Danielsen, S., Eriksen, L., Galucio, A. V., Van Gijn, R., Van de Kerke, S., Kolipakam, V., Krasnoukhova, O., Müller, N., & O'Connor, L. (2014). The languages of South America: Deep families, areal relationships, and language contact. In P. Muysken, & L. O'Connor (Eds.), Language contact in South America (pp. 299-323). Cambridge: Cambridge University Press.
  • Neijt, A., Schreuder, R., & Baayen, R. H. (2003). Verpleegsters, ambassadrices, and masseuses: Stratum differences in the comprehension of Dutch words with feminine agent suffixes. In L. Cornips, & P. Fikkert (Eds.), Linguistics in the Netherlands 2003 (pp. 117-127). Amsterdam: Benjamins.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between them. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is, independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Nordhoff, S., & Hammarström, H. (2014). Archiving grammatical descriptions. In P. K. Austin (Ed.), Language Documentation and Description. Vol. 12 (pp. 164-186). London: SOAS.
  • Norris, D., McQueen, J. M., & Cutler, A. (1994). Competition and segmentation in spoken word recognition. In Proceedings of the Third International Conference on Spoken Language Processing: Vol. 1 (pp. 401-404). Yokohama: PACIFICO.

    Abstract

    This paper describes recent experimental evidence which shows that models of spoken word recognition must incorporate both inhibition between competing lexical candidates and a sensitivity to metrical cues to lexical segmentation. A new version of the Shortlist [1][2] model incorporating the Metrical Segmentation Strategy [3] provides a detailed simulation of the data.
  • O'Connor, L., & Kolipakam, V. (2014). Human migrations, dispersals, and contacts in South America. In L. O'Connor, & P. Muysken (Eds.), The native languages of South America: Origins, development, typology (pp. 29-55). Cambridge: Cambridge University Press.
  • Oostdijk, N., & Broeder, D. (2003). The Spoken Dutch Corpus and its exploitation environment. In A. Abeille, S. Hansen-Schirra, & H. Uszkoreit (Eds.), Proceedings of the 4th International Workshop on linguistically interpreted corpora (LINC-03) (pp. 93-101).
  • Ortega, G. (2013). Acquisition of a signed phonological system by hearing adults: The role of sign structure and iconicity. PhD Thesis, University College London, London.

    Abstract

    The phonological system of a sign language comprises meaningless sub-lexical units that define the structure of a sign. A number of studies have examined how learners of a sign language as a first language (L1) acquire these components. However, little is understood about the mechanism by which hearing adults develop visual phonological categories when learning a sign language as a second language (L2). Developmental studies have shown that sign complexity and iconicity, the clear mapping between the form of a sign and its referent, shape in different ways the order of emergence of a visual phonology. The aim of the present dissertation was to investigate how these two factors affect the development of a visual phonology in hearing adults learning a sign language as L2. The empirical data gathered in this dissertation confirm that sign structure and iconicity are important factors that determine L2 phonological development. Non-signers perform better at discriminating the contrastive features of phonologically simple signs than signs with multiple elements. Handshape was the parameter most difficult to learn, followed by movement, then orientation and finally location, which is the same order of acquisition reported in L1 sign acquisition. In addition, the ability to access the iconic properties of signs had a detrimental effect on phonological development, because iconic signs were consistently articulated less accurately than arbitrary signs. Participants tended to retain the iconic elements of signs but disregarded their exact phonetic structure. Further, non-signers appeared to process iconic signs as iconic gestures, at least at the early stages of sign language acquisition. The empirical data presented in this dissertation suggest that non-signers exploit their gestural system as scaffolding for the new manual linguistic system and that sign L2 phonological development is strongly influenced by the structural complexity of a sign and its degree of iconicity.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, despite non-signers finding iconic signs more memorable, they can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that, in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.

    Abstract

    Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input.
  • Osswald, R., & Van Valin Jr., R. D. (2013). FrameNet, frame structure and the syntax-semantics interface. In T. Gamerschlag, D. Gerland, R. Osswald, & W. Petersen (Eds.), Frames and concept types: Applications in language and philosophy. Heidelberg: Springer.
  • Otake, T., & Cutler, A. (2003). Evidence against "units of perception". In S. Shohov (Ed.), Advances in psychology research (pp. 57-82). Hauppauge, NY: Nova Science.
  • Ouni, S., Cohen, M. M., Young, K., & Jesse, A. (2003). Internationalization of a talking head. In M. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 2569-2572). Barcelona: Casual Productions.

    Abstract

    In this paper we describe a general scheme for internationalization of our talking head, Baldi, to speak other languages. We describe the modular structure of the auditory/visual synthesis software. As an example, we have created a synthetic Arabic talker, which is evaluated using a noisy word recognition task comparing this talker with a natural one.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A. (1994). How children talk about a conversation. In K. Beals, J. Denton, R. Knippen, L. Melnar, H. Suzuki, & E. Zeinfeld (Eds.), Papers from the Thirtieth Regional Meeting of the Chicago Linguistic Society: Main Session (pp. 309-319). Chicago, Ill: Chicago Linguistic Society.
  • Ozyurek, A. (1994). How children talk about conversations: Development of roles and voices. In E. V. Clark (Ed.), Proceedings of the Twenty-Sixth Annual Child Language Research Forum (pp. 197-206). Stanford: CSLI Publications.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society.
  • Perlman, M., Clark, N., & Tanner, J. (2014). Iconicity and ape gesture. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 236-243). New Jersey: World Scientific.

    Abstract

    Iconic gestures are hypothesized to be crucial to the evolution of language. Yet the important question of whether apes produce iconic gestures is the subject of considerable debate. This paper presents the current state of research on iconicity in ape gesture. In particular, it describes some of the empirical evidence suggesting that apes produce three different kinds of iconic gestures; it compares the iconicity hypothesis to other major hypotheses of ape gesture; and finally, it offers some directions for future ape gesture research.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than on semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than on unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition, respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Piai, V. (2014). Choosing our words: Lexical competition and the involvement of attention in spoken word production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Plomp, R., & Levelt, W. J. M. (1966). Perception of tonal consonance. In M. A. Bouman (Ed.), Studies in Perception - dedicated to M.A. Bouman (pp. 105-118). Soesterberg: Institute for Perception RVO-TNO.
  • Poellmann, K. (2013). The many ways listeners adapt to reductions in casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Puccini, D. (2013). The use of deictic versus representational gestures in infancy. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ravignani, A., Gingras, B., Asano, R., Sonnweber, R., Matellan, V., & Fitch, W. T. (2013). The evolution of rhythmic cognition: New perspectives and technologies in comparative research. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 1199-1204). Austin, TX: Cognitive Science Society.

    Abstract

    Music is a pervasive phenomenon in human culture, and musical rhythm is virtually present in all musical traditions. Research on the evolution and cognitive underpinnings of rhythm can benefit from a number of approaches. We outline key concepts and definitions, allowing fine-grained analysis of rhythmic cognition in experimental studies. We advocate comparative animal research as a useful approach to answer questions about human music cognition and review experimental evidence from different species. Finally, we suggest future directions for research on the cognitive basis of rhythm. Apart from research in semi-natural setups, possibly allowed by “drum set for chimpanzees” prototypes presented here for the first time, mathematical modeling and systematic use of circular statistics may allow promising advances.
  • Ravignani, A., Bowling, D., & Kirby, S. (2014). The psychology of biological clocks: A new framework for the evolution of rhythm. In E. A. Cartmill, S. G. Roberts, & H. Lyn (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 262-269). Singapore: World Scientific.
  • Reesink, G. (2014). Topic management and clause combination in the Papuan language Usan. In R. Van Gijn, J. Hammond, D. Matic, S. van Putten, & A.-V. Galucio (Eds.), Information Structure and Reference Tracking in Complex Sentences (pp. 231-262). Amsterdam: John Benjamins.

    Abstract

    This chapter describes topic management in the Papuan language Usan. The notion of ‘topic’ is defined by its pre-theoretical meaning ‘what someone’s speech is about’. This notion cannot be restricted to simple clausal or sentential constructions, but requires the wider context of long stretches of natural text. The tracking of a topic is examined in its relationship to clause combining mechanisms. Coordinating clause chaining with its switch reference mechanism is contrasted with subordinating strategies called ‘domain-creating’ constructions. These different strategies are identified by language-specific signals, such as intonation and morphosyntactic cues like nominalizations and scope of negation and other modalities.
  • Reifegerste, J. (2014). Morphological processing in younger and older people: Evidence for flexible dual-route access. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Roberts, S. G., Dediu, D., & Levinson, S. C. (2014). Detecting differences between the languages of Neandertals and modern humans. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 501-502). Singapore: World Scientific.

    Abstract

    Dediu and Levinson (2013) argue that Neandertals had essentially modern language and speech, and that they were in genetic contact with the ancestors of modern humans during our dispersal out of Africa. This raises the possibility of cultural and linguistic contact between the two human lineages. If such contact did occur, then it might have influenced the cultural evolution of the languages. Since the genetic traces of contact with Neandertals are limited to the populations outside of Africa, Dediu and Levinson predict that there may be structural differences between the present-day languages derived from languages in contact with Neandertals, and those derived from languages that were not influenced by such contact. Since the signature of such deep contact might reside in patterns of features, they suggested that machine learning methods may be able to detect these differences. This paper attempts to test this hypothesis and to estimate particular linguistic features that are potential candidates for carrying a signature of Neandertal languages.
  • Roberts, L. (2013). Discourse processing. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 190-194). New York: Routledge.
  • Roberts, S. G. (2013). A Bottom-up approach to the cultural evolution of bilingualism. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1229-1234). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0236/index.html.

    Abstract

    The relationship between individual cognition and cultural phenomena at the society level can be transformed by cultural transmission (Kirby, Dowman, & Griffiths, 2007). Top-down models of this process have typically assumed that individuals only adopt a single linguistic trait. Recent extensions include ‘bilingual’ agents, able to adopt multiple linguistic traits (Burkett & Griffiths, 2010). However, bilingualism is more than variation within an individual: it involves the conditional use of variation with different interlocutors. That is, bilingualism is a property of a population that emerges from use. A bottom-up simulation is presented where learners are sensitive to the identity of other speakers. The simulation reveals that dynamic social structures are a key factor for the evolution of bilingualism in a population, a feature that was abstracted away in the top-down models. Top-down and bottom-up approaches may lead to different answers, but can work together to reveal and explore important features of the cultural transmission process.
  • Roberts, S. G., & De Vos, C. (2014). Gene-culture coevolution of a linguistic system in two modalities. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 23-27).

    Abstract

    Complex communication can take place in a range of modalities such as auditory, visual, and tactile modalities. In a very general way, the modality that individuals use is constrained by their biological biases (humans cannot use magnetic fields directly to communicate to each other). The majority of natural languages have a large audible component. However, since humans can learn sign languages just as easily, it’s not clear to what extent the prevalence of spoken languages is due to biological biases, the social environment or cultural inheritance. This paper suggests that we can explore the relative contribution of these factors by modelling the spontaneous emergence of sign languages that are shared by the deaf and hearing members of relatively isolated communities. Such shared signing communities have arisen in enclaves around the world and may provide useful insights by demonstrating how languages evolve as the deaf proportion of its members has strong biases towards the visual language modality. In this paper we describe a model of cultural evolution in two modalities, combining aspects that are thought to impact the emergence of sign languages in a more general evolutionary framework. The model can be used to explore hypotheses about how sign languages emerge.
  • Roberts, S. G. (2014). Monolingual Biases in Simulations of Cultural Transmission. In V. Dignum, & F. Dignum (Eds.), Perspectives on Culture and Agent-based Simulations (pp. 111-125). Cham: Springer. doi:10.1007/978-3-319-01952-9_7.

    Abstract

    Recent research suggests that the evolution of language is affected by the inductive biases of its learners. I suggest that there is an implicit assumption that one of these biases is to expect a single linguistic system in the input. Given the prevalence of bilingual cultures, this may not be a valid abstraction. This is illustrated by demonstrating that the ‘minimal naming game’ model, in which a shared lexicon evolves in a population of agents, includes an implicit mutual exclusivity bias. Since recent research suggests that children raised in bilingual cultures do not exhibit mutual exclusivity, the individual learning algorithm of the agents is not as abstract as it appears to be. A modification of this model demonstrates that communicative success can be achieved without mutual exclusivity. It is concluded that complex cultural phenomena, such as bilingualism, do not necessarily result from complex individual learning mechanisms. Rather, the cultural process itself can bring about this complexity.
  • Roberts, S. G., & Quillinan, J. (2014). The Chimp Challenge: Working memory in chimps and humans. In L. McCrohon, B. Thompson, T. Verhoef, & H. Yamauchi (Eds.), The Past, Present and Future of Language Evolution Research: Student volume of the 9th International Conference on the Evolution of Language (pp. 31-39). Tokyo: EvoLang9 Organising Committee.

    Abstract

    Matsuzawa (2012) presented work at Evolang demonstrating the working memory abilities of chimpanzees. Inoue and Matsuzawa (2007) found that chimpanzees can correctly remember the location of 9 randomly arranged numerals displayed for 210 ms - shorter than an average human eye saccade. Humans, however, perform poorly at this task. Matsuzawa suggests a semantic link hypothesis: while chimps have good visual, eidetic memory, humans are good at symbolic associations. The extra information in the semantic, linguistic links that humans possess increases the load on working memory and makes this task difficult for them. We were interested to see whether a wider search could find humans that matched the performance of the chimpanzees. We created an online version of the experiment and challenged people to play. We also attempted to run a non-semantic version of the task to see if this made the task easier. We found that, while humans can perform better than Inoue and Matsuzawa (2007) suggest, chimpanzees can perform better still. We also found no evidence to support the semantic link hypothesis.
  • Roberts, L. (2013). Sentence processing in bilinguals. In R. Van Gompel (Ed.), Sentence processing. London: Psychology Press.
  • Roberts, S. G., Thompson, B., & Smith, K. (2014). Social interaction influences the evolution of cognitive biases for language. In E. A. Cartmill, S. G. Roberts, & H. Lyn (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 278-285). Singapore: World Scientific. doi:10.1142/9789814603638_0036.

    Abstract

    Models of cultural evolution demonstrate that the link between individual biases and population-level phenomena can be obscured by the process of cultural transmission (Kirby, Dowman, & Griffiths, 2007). However, recent extensions to these models predict that linguistic diversity will not emerge and that learners should evolve to expect little linguistic variation in their input (Smith & Thompson, 2012). We demonstrate that this result derives from assumptions that privilege certain kinds of social interaction by exploring a range of alternative social models. We find several evolutionary routes to linguistic diversity, and show that social interaction not only influences the kinds of biases which could evolve to support language, but also the effects those biases have on a linguistic system. Given the same starting situation, the evolution of biases for language learning and the distribution of linguistic variation are affected by the kinds of social interaction that a population privileges.
  • Roelofs, A. (2003). Modeling the relation between the production and recognition of spoken word forms. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 115-158). Berlin: Mouton de Gruyter.
  • Rojas-Berscia, L. M. (2014). A Heritage Reference Grammar of Selk’nam. Master Thesis, Radboud University, Nijmegen.
  • Rommers, J. (2013). Seeing what's next: Processing and anticipating language referring to objects. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Rossano, F. (2013). Gaze in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 308-329). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch15.

    Abstract

    This chapter contains sections titled: Introduction; Background: The Gaze “Machinery”; Gaze “Machinery” in Social Interaction; Future Directions.
  • Rossi, G. (2014). When do people not use language to make requests? In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 301-332). Amsterdam: John Benjamins.

    Abstract

    In everyday joint activities (e.g. playing cards, preparing potatoes, collecting empty plates), participants often request others to pass, move or otherwise deploy objects. In order to get these objects to or from the requestee, requesters need to manipulate them, for example by holding them out, reaching for them, or placing them somewhere. As they perform these manual actions, requesters may or may not accompany them with language (e.g. Take this potato and cut it or Pass me your plate). This study shows that adding or omitting language in the design of a request is influenced in the first place by a criterion of recognition. When the requested action is projectable from the advancement of an activity, presenting a relevant object to the requestee is enough for them to understand what to do; when, on the other hand, the requested action is occasioned by a contingent development of the activity, requesters use language to specify what the requestee should do. This criterion operates alongside a perceptual criterion, to do with the affordances of the visual and auditory modality. When the requested action is projectable but the requestee is not visually attending to the requester’s manual behaviour, the requester can use just enough language to attract the requestee’s attention and secure immediate recipiency. This study contributes to a line of research concerned with the organisation of verbal and nonverbal resources for requesting. Focussing on situations in which language is not – or only minimally – used, it demonstrates the role played by visible bodily behaviour and by the structure of everyday activities in the formation and understanding of requests.
  • Rowland, C. F., Noble, C. H., & Chan, A. (2014). Competition all the way down: How children learn word order cues to sentence meaning. In B. MacWhinney, A. Malchukov, & E. Moravcsik (Eds.), Competing Motivations in Grammar and Usage (pp. 125-143). Oxford: Oxford University Press.

    Abstract

    Most work on competing cues in language acquisition has focussed on what happens when cues compete within a certain construction. There has been far less work on what happens when constructions themselves compete. The aim of the present chapter was to explore how the acquisition mechanism copes when constructions compete in a language. We present three experimental studies, all of which focus on the acquisition of the syntactic function of word order as a marker of the Theme-Recipient relation in ditransitives (form-meaning mapping). In Study 1 we investigated how quickly English children acquire form-meaning mappings when there are two competing structures in the language. We demonstrated that English-speaking 4-year-olds, but not 3-year-olds, correctly interpreted both prepositional and double object datives, assigning Theme and Recipient participant roles on the basis of word order cues. There was no advantage for the double object dative despite its greater frequency in child-directed speech. In Study 2 we looked at acquisition in a language which has no dative alternation – Welsh – to investigate how quickly children acquire form-meaning mapping when there is no competing structure. We demonstrated that Welsh children acquired the prepositional dative at age 3 years, which was much earlier than English children. Finally, in Study 3 we examined bei2 (give) ditransitives in Cantonese, to investigate what happens when there is no dative alternation (as in Welsh), but when the child hears alternative, and possibly competing, word orders in the input. Like the English 3-year-olds, the Cantonese 3-year-olds had not yet acquired the word order marking constraints of bei2 ditransitives. We conclude that there is not only competition between cues but competition between constructions in language acquisition. We suggest an extension to the competition model (Bates & MacWhinney, 1982) whereby generalisations take place across constructions as easily as they take place within constructions, whenever there are salient similarities to form the basis of the generalisation.
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • De Ruiter, J. P. (2003). The function of hand gesture in spoken conversation. In M. Bickenbach, A. Klappert, & H. Pompe (Eds.), Manus Loquens: Medium der Geste, Gesten der Medien (pp. 338-347). Cologne: DuMont.
  • De Ruiter, J. P. (2003). A quantitative model of Störung. In A. Kümmel, & E. Schüttpelz (Eds.), Signale der Störung (pp. 67-81). München: Wilhelm Fink Verlag.
  • De Ruiter, J. P. (1998). Gesture and speech production. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057686.
  • Rumsey, A., San Roque, L., & Schieffelin, B. (2013). The acquisition of ergative marking in Kaluli, Ku Waru and Duna (Trans New Guinea). In E. L. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 133-182). Amsterdam: Benjamins.

    Abstract

    In this chapter we present material on the acquisition of ergative marking on noun phrases in three languages of Papua New Guinea: Kaluli, Ku Waru, and Duna. The expression of ergativity in all the languages is broadly similar, but sensitive to language-specific features, and this pattern of similarity and difference is reflected in the available acquisition data. Children acquire adult-like ergative marking at about the same pace, reaching similar levels of mastery by 3;00 despite considerable differences in morphological complexity of ergative marking among the languages. What may be more important – as a factor in accounting for the relative uniformity of acquisition in this respect – are the similarities in patterns of interactional scaffolding that emerge from a comparison of the three cases.
  • Sauppe, S., Norcliffe, E., Konopka, A. E., Van Valin Jr., R. D., & Levinson, S. C. (2013). Dependencies first: Eye tracking evidence from sentence production in Tagalog. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1265-1270). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the time course of sentence formulation in Tagalog, a verb-initial language in which the verb obligatorily agrees with one of its arguments. Eye-tracked participants described pictures of transitive events. Fixations to the two characters in the events were compared across sentences differing in agreement marking and post-verbal word order. Fixation patterns show evidence for two temporally dissociated phases in Tagalog sentence production. The first, driven by verb agreement, involves early linking of concepts to syntactic functions; the second, driven by word order, involves incremental lexical encoding of these concepts. These results suggest that even the earliest stages of sentence formulation may be guided by a language's grammatical structure.
  • Scharenborg, O., & Janse, E. (2013). Changes in the role of intensity as a cue for fricative categorisation. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 3147-3151).

    Abstract

    Older listeners with high-frequency hearing loss rely more on intensity for categorisation of /s/ than normal-hearing older listeners. This study addresses the question whether this increased reliance comes about immediately when the need arises, i.e., in the face of a spectrally-degraded signal. A phonetic categorisation task was carried out using intensity-modulated fricatives in a clean and a low-pass filtered condition with two younger and two older listener groups. When high-frequency information was removed from the speech signal, younger listeners started using intensity as a cue. The older adults, on the other hand, when presented with the low-pass filtered speech, did not rely on intensity differences for fricative identification. These results suggest that the reliance on intensity shown by the older hearing-impaired adults may have been acquired only gradually with longer exposure to a degraded speech signal.
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). Recognising 'real-life' speech with SpeM: A speech-based computational model of human speech recognition. In Eurospeech 2003 (pp. 2285-2288).

    Abstract

    In this paper, we present a novel computational model of human speech recognition – called SpeM – based on the theory underlying Shortlist. We will show that SpeM, in combination with an automatic phone recogniser (APR), is able to simulate the human speech recognition process from the acoustic signal to the ultimate recognition of words. This joint model takes an acoustic speech file as input and calculates the activation flows of candidate words on the basis of the degree of fit of the candidate words with the input. Experiments showed that SpeM outperforms Shortlist on the recognition of ‘real-life’ input. Furthermore, SpeM performs only slightly worse than an off-the-shelf full-blown automatic speech recogniser in which all words are equally probable, while it provides a transparent, computationally elegant paradigm for modelling word activations in human word recognition.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Schiller, N. O. (2003). Metrical stress in speech production: A time course study. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 451-454). Adelaide: Causal Productions.

    Abstract

    This study investigated the encoding of metrical information during speech production in Dutch. In Experiment 1, participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., LEpel 'spoon') than for targets with final stress (e.g., liBEL 'dragon fly'; capital letters indicate stressed syllables) and revealed that the monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with bi- and trisyllabic picture names. These results demonstrate that metrical information of words is encoded rightward incrementally during phonological encoding in speech production. The results of these experiments are in line with Levelt's model of phonological encoding.
  • Schiller, N. O., & Meyer, A. S. (2003). Introduction to the relation between speech comprehension and production. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 1-8). Berlin: Mouton de Gruyter.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2014). Age, hearing loss and the perception of affective utterances in conversational speech. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 1929-1933).

    Abstract

    This study investigates whether age and/or hearing loss influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech fragments. Specifically, this study focuses on the relationship between participants' ratings of affective speech and acoustic parameters known to be associated with arousal and valence (mean F0, intensity, and articulation rate). Ten normal-hearing younger and ten older adults with varying hearing loss were tested on two rating tasks. Stimuli consisted of short sentences taken from a corpus of conversational affective speech. In both rating tasks, participants estimated the value of the emotion dimension at hand using a 5-point scale. For arousal, higher intensity was generally associated with higher arousal in both age groups. Compared to younger participants, older participants rated the utterances as less aroused, and showed a smaller effect of intensity on their arousal ratings. For valence, higher mean F0 was associated with more negative ratings in both age groups. Generally, age group differences in rating affective utterances may not relate to age group differences in hearing loss, but rather to other differences between the age groups, as older participants' rating patterns were not associated with their individual hearing loss.
  • Schmiedtová, B. (2003). The use of aspect in Czech L2. In D. Bittner, & N. Gagarina (Eds.), ZAS Papers in Linguistics (pp. 177-194). Berlin: Zentrum für Allgemeine Sprachwissenschaft.
  • Schmiedtová, B. (2003). Aspekt und Tempus im Deutschen und Tschechischen: Eine vergleichende Studie. In S. Höhne (Ed.), Germanistisches Jahrbuch Tschechien - Slowakei: Schwerpunkt Sprachwissenschaft (pp. 185-216). Praha: Lidové noviny.
  • Schoffelen, J.-M., & Gross, J. (2014). Studying dynamic neural interactions with MEG. In S. Supek, & C. J. Aine (Eds.), Magnetoencephalography: From signals to dynamic cortical networks (pp. 405-427). Berlin: Springer.
  • Schreuder, R., Burani, C., & Baayen, R. H. (2003). Parsing and semantic opacity. In E. M. Assink, & D. Sandra (Eds.), Reading complex words (pp. 159-189). Dordrecht: Kluwer.
  • Scott, K., Sakkalou, E., Ellis-Davies, K., Hilbrink, E., Hahn, U., & Gattis, M. (2013). Infant contributions to joint attention predict vocabulary development. In M. Knauff, M. Pauen, I. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 3384-3389). Austin,TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0602/index.html.

    Abstract

    Joint attention has long been accepted as constituting a privileged circumstance in which word learning prospers. Consequently research has investigated the role that maternal responsiveness to infant attention plays in predicting language outcomes. However there has been a recent expansion in research implicating similar predictive effects from individual differences in infant behaviours. Emerging from the foundations of such work comes an interesting question: do the relative contributions of the mother and infant to joint attention episodes impact upon language learning? In an attempt to address this, two joint attention behaviours were assessed as predictors of vocabulary attainment (as measured by OCDI Production Scores). These predictors were: mothers encouraging attention to an object given that their infant was already attending to an object (maternal follow-in); and infants looking to an object given their mother's encouragement of attention to an object (infant follow-in). In a sample of 14-month-old children (N=36) we compared the predictive power of these maternal and infant follow-in variables on concurrent and later language performance. Results using Growth Curve Analysis provided evidence that while both maternal follow-in and infant follow-in variables contributed to production scores, infant follow-in was a stronger predictor. Consequently it does appear to matter whose final contribution establishes joint attention episodes. Infants who more often follow in to their mothers' encouragement of attention have larger, and faster growing, vocabularies between 14 and 18 months of age.
  • Scott, S. K., McGettigan, C., & Eisner, F. (2013). The neural basis of links and dissociations between speech perception and production. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 277-294). Cambridge, Mass: MIT Press.
  • Seidl, A., & Johnson, E. K. (2003). Position and vowel quality effects in infant's segmentation of vowel-initial words. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2233-2236). Adelaide: Causal Productions.
  • Seifart, F. (2003). Encoding shape: Formal means and semantic distinctions. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 57-59). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877660.

    Abstract

    The basic idea behind this task is to find out how languages encode basic shape distinctions such as dimensionality, axial geometry, relative size, etc. More specifically, we want to find out (i) which formal means are used cross-linguistically to encode basic shape distinctions, and (ii) which are the semantic distinctions that are made in this domain. In languages with many shape-classifiers, these distinctions are encoded (at least partially) in classifiers. In other languages, positional verbs, descriptive modifiers, such as “flat”, “round”, or nouns such as “cube”, “ball”, etc. might be the preferred means. In this context, we also want to investigate what other “grammatical work” shape-encoding expressions possibly do in a given language, e.g. unitization of mass nouns, or anaphoric uses of shape-encoding classifiers, etc. This task further seeks to determine the role of shape-related parameters which underlie the design of objects in the semantics of the system under investigation.
  • Senft, G. (2003). Wosi Milamala: Weisen von Liebe und Tod auf den Trobriand Inseln. In I. Bobrowski (Ed.), Anabasis: Prace Ofiarowane Professor Krystynie Pisarkowej (pp. 289-295). Kraków: LEXIS.
  • Senft, G. (2003). Zur Bedeutung der Sprache für die Feldforschung. In B. Beer (Ed.), Methoden und Techniken der Feldforschung (pp. 55-70). Berlin: Reimer.
  • Senft, G. (1994). Darum gehet hin und lehret alle Völker: Mission, Kultur- und Sprachwandel am Beispiel der Trobriand-Insulaner von Papua-Neuguinea. In P. Stüben (Ed.), Seelenfischer: Mission, Stammesvölker und Ökologie (pp. 71-91). Gießen: Focus.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (2003). Ethnographic Methods. In W. Deutsch, T. Hermann, & G. Rickheit (Eds.), Psycholinguistik - Ein internationales Handbuch [Psycholinguistics - An International Handbook] (pp. 106-114). Berlin: Walter de Gruyter.
  • Senft, G. (2003). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie: Einführung und Überblick. 5. Aufl., Neufassung (pp. 255-270). Berlin: Reimer.
  • Senft, G. (2013). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie - Einführung und Überblick. (8. Auflage, pp. 271-286). Berlin: Reimer.
  • Senft, G. (2003). Reasoning in language. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 28-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877663.

    Abstract

    This project aims to investigate how speakers of various languages in indigenous cultures verbally reason about moral issues. The ways in which a solution for a moral problem is found, phrased and justified will be taken as the basis for researching reasoning processes that manifest themselves verbally in the speakers’ arguments put forward to solve a number of moral problems which will be presented to them in the form of unfinished story plots or scenarios that ask for a solution. The plots chosen attempt to present common problems in human society and human behaviour. They should function to elicit moral discussion and/or moral arguments in groups of consultants of at least three persons.
  • Senft, G. (1998). Zeichenkonzeptionen in Ozeanien. In R. Posner, T. Robering, & T. Sebeok (Eds.), Semiotics: A handbook on the sign-theoretic foundations of nature and culture (Vol. 2) (pp. 1971-1976). Berlin: de Gruyter.
  • Senghas, A., Ozyurek, A., & Kita, S. (2003). Encoding motion events in an emerging sign language: From Nicaraguan gestures to Nicaraguan signs. In A. E. Baker, B. van den Bogaerde, & O. A. Crasborn (Eds.), Crosslinguistic perspectives in sign language research (pp. 119-130). Hamburg: Signum Press.
  • Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2013). Homesign as a way-station between co-speech gesture and sign language: The evolution of segmenting and sequencing. In R. Botha, & M. Everaert (Eds.), The evolutionary emergence of language: Evidence and inference (pp. 62-77). Oxford: Oxford University Press.
  • Seuren, P. A. M. (2003). Verb clusters and branching directionality in German and Dutch. In P. A. M. Seuren, & G. Kempen (Eds.), Verb Constructions in German and Dutch (pp. 247-296). Amsterdam: John Benjamins.
  • Seuren, P. A. M. (1994). Categorial presupposition. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 2) (pp. 477-478). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Accommodation and presupposition. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 1) (pp. 15-16). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Denotation in discourse semantics. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 2) (pp. 859-860). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Donkey sentences. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 2) (pp. 1059-1060). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Discourse domain. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 2) (pp. 964-965). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Discourse semantics. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 2) (pp. 982-993). Oxford: Pergamon Press.
  • Seuren, P. A. M. (2003). Logic, language and thought. In H. J. Ribeiro (Ed.), Encontro nacional de filosofia analítica. (pp. 259-276). Coimbra, Portugal: Faculdade de Letras.
  • Seuren, P. A. M. (1994). Factivity. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1205). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Function, set-theoretical. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1314). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Incrementation. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1646). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1966). Het probleem van de woorddefinitie. In Handelingen van het 29ste Nederlands Filologencongres (pp. 103-108).
  • Seuren, P. A. M. (1994). Lexical conditions. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 4) (pp. 2140-2141). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Existence predicate (discourse semantics). In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1190-1191). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Existential presupposition. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1191-1192). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Presupposition. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 6) (pp. 3311-3320). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Projection problem. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 6) (pp. 3358-3360). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). The computational lexicon: All lexical content is predicate. In Z. Yusoff (Ed.), Proceedings of the International Conference on Linguistic Applications 26-28 July 1994 (pp. 211-216). Penang: Universiti Sains Malaysia, Unit Terjemahan Melalui Komputer (UTMK).