Publications

  • Richter, N., Tiddeman, B., & Haun, D. (2016). Social Preference in Preschoolers: Effects of Morphological Self-Similarity and Familiarity. PLoS One, 11(1): e0145443. doi:10.1371/journal.pone.0145443.

    Abstract

    Adults prefer to interact with others that are similar to themselves. Even slight facial self-resemblance can elicit trust towards strangers. Here we investigate if preschoolers at the age of 5 years already use facial self-resemblance when they make social judgments about others. We found that, in the absence of any additional knowledge about prospective peers, children preferred those who look subtly like themselves over complete strangers. Thus, subtle morphological similarities trigger social preferences well before adulthood.
  • Rietveld, T., Van Hout, R., & Ernestus, M. (2004). Pitfalls in corpus research. Computers and the Humanities, 38(4), 343-362. doi:10.1007/s10579-004-1919-1.

    Abstract

    This paper discusses some pitfalls in corpus research and suggests solutions on the basis of examples and computer simulations. We first address reliability problems in language transcriptions, agreement between transcribers, and how disagreements can be dealt with. We then show that the frequencies of occurrence obtained from a corpus cannot always be analyzed with the traditional χ² test, as corpus data are often not sequentially independent and unit independent. Next, we stress the relevance of the power of statistical tests, and the sizes of statistically significant effects. Finally, we point out that a t-test based on log odds often provides a better alternative to a χ² analysis based on frequency counts.
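    The alternative the abstract recommends — a t-test on log odds per unit (e.g. per speaker) rather than a χ² test on pooled frequency counts — can be sketched as follows. This is an illustrative reconstruction with made-up counts, not code or data from the paper:

```python
# Illustrative sketch (hypothetical data): aggregating corpus counts per
# speaker and running a t-test on log odds, instead of pooling all tokens
# into a single chi-square table whose independence assumption they violate.
import math
import statistics

# Hypothetical per-speaker counts: (tokens with feature, tokens without).
group_a = [(12, 38), (30, 20), (9, 41), (25, 25)]
group_b = [(5, 45), (14, 36), (8, 42), (11, 39)]

def log_odds(present, absent):
    # +0.5 (Haldane-Anscombe correction) guards against zero counts.
    return math.log((present + 0.5) / (absent + 0.5))

odds_a = [log_odds(p, a) for p, a in group_a]
odds_b = [log_odds(p, a) for p, a in group_b]

def welch_t(x, y):
    # Welch's two-sample t statistic: each speaker contributes exactly one
    # observation, so dependence between tokens of one speaker drops out.
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

print(f"t = {welch_t(odds_a, odds_b):.2f}")
```

    The resulting t statistic is then compared against a t distribution; the key design point is that the unit of analysis is the speaker, not the token.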
  • Ringersma, J., Zinn, C., & Kemps-Snijders, M. (2009). LEXUS & ViCoS: From lexical to conceptual spaces. In 1st International Conference on Language Documentation and Conservation (ICLDC).

    Abstract

    LEXUS is a web-based lexicon tool, and the knowledge space software ViCoS is an extension of LEXUS, allowing users to create relations between objects within and across lexica. LEXUS and ViCoS are part of the Language Archiving Technology software, developed at the MPI for Psycholinguistics to archive and enrich linguistic resources collected in the framework of language documentation projects. LEXUS is of primary interest for language documentation, offering the possibility not only to create a digital dictionary but also multi-media encyclopedic lexica. ViCoS provides an interface between the lexical space and the ontological space. Its approach permits users to model a world of concepts and their interrelations based on categorization patterns made by the speech community. We describe the LEXUS and ViCoS functionalities using three cases from DoBeS language documentation projects: (1) Marquesan: The Marquesan lexicon was initially created in Toolbox and imported into LEXUS using the Toolbox import functionality. The lexicon is enriched with multi-media to illustrate the meaning of the words in their cultural environment. Members of the speech community consider words as keys to access and describe relevant parts of their life and traditions. Their understanding of words is best described by the various associations they evoke rather than in terms of any formal theory of meaning. Using ViCoS, a knowledge space of related concepts is being created. (2) Kola-Sámi: Two lexica are being created in LEXUS: RuSaDic is a Russian-Kildin wordlist in which the entries have relatively limited structure and content, while SaRuDic is a more complex structured lexicon with much richer content, including multi-media fragments and derivations. Using ViCoS we have created a connection between the two lexica, so that speakers who are familiar with Russian and wish to revitalize their Kildin can enter the lexicon through RuSaDic and from there approach the more informative SaRuDic. Similarly, we will create relations from the two lexica to external open databases such as Álgu. (3) Beaver: A speaker database including kinship relations has been created and imported into LEXUS. In the LEXUS views, the relations for individual speakers are displayed. Using ViCoS, the relational information from the database will be extracted to form a kinship relation space with specific relation types, such as 'mother-of'. The whole set of relations from the database can be displayed in one ViCoS relation window, and zoom functionality is available.
  • Roberts, S. G., & Verhoef, T. (2016). Double-blind reviewing at EvoLang 11 reveals gender bias. Journal of Language Evolution, 1(2), 163-167. doi:10.1093/jole/lzw009.

    Abstract

    The impact of introducing double-blind reviewing in the most recent Evolution of Language conference is assessed. The ranking of papers is compared between EvoLang 11 (double-blind review) and EvoLang 9 and 10 (single-blind review). Main effects were found for first author gender by conference. The results mirror some findings in the literature on the effects of double-blind review, suggesting that it helps reduce a bias against female authors.

    Additional information

    SI.pdf
  • Roberts, L., Véronique, D., Nilsson, A., & Tellier, M. (Eds.). (2009). EUROSLA Yearbook 9. Amsterdam: John Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Roberts, S. G., Cuskley, C., McCrohon, L., Barceló-Coblijn, L., Feher, O., & Verhoef, T. (Eds.). (2016). The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). doi:10.17617/2.2248195.
  • Robinson, E. B., St Pourcain, B., Anttila, V., Kosmicki, J. A., Bulik-Sullivan, B., Grove, J., Maller, J., Samocha, K. E., Sanders, S. J., Ripke, S., Martin, J., Hollegaard, M. V., Werge, T., Hougaard, D. M., iPSYCH-SSI-Broad Autism Group, Neale, B. M., Evans, D. M., Skuse, D., Mortensen, P. B., Borglum, A. D., Ronald, A., Smith, G. D., & Daly, M. J. (2016). Genetic risk for autism spectrum disorders and neuropsychiatric variation in the general population. Nature Genetics, 48, 552-555. doi:10.1038/ng.3529.

    Abstract

    Almost all genetic risk factors for autism spectrum disorders (ASDs) can be found in the general population, but the effects of this risk are unclear in people not ascertained for neuropsychiatric symptoms. Using several large ASD consortium and population-based resources (total n > 38,000), we find genome-wide genetic links between ASDs and typical variation in social behavior and adaptive functioning. This finding is evidenced through both LD score correlation and de novo variant analysis, indicating that multiple types of genetic risk for ASDs influence a continuum of behavioral and developmental traits, the severe tail of which can result in diagnosis with an ASD or other neuropsychiatric disorder. A continuum model should inform the design and interpretation of studies of neuropsychiatric disease biology.

    Additional information

    ng.3529-S1.pdf
  • Rodd, J., & Chen, A. (2016). Pitch accents show a perceptual magnet effect: Evidence of internal structure in intonation categories. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 697-701).

    Abstract

    The question of whether intonation events have a categorical mental representation has long been a puzzle in prosodic research, and one that experiments testing production and perception across category boundaries have failed to definitively resolve. This paper takes the alternative approach of looking for evidence of structure within a postulated category by testing for a Perceptual Magnet Effect (PME). PME has been found in boundary tones but has not previously been conclusively found in pitch accents. In this investigation, perceived goodness and discriminability of re-synthesised Dutch nuclear rise contours (L*H H%) were evaluated by naive native speakers of Dutch. The variation between these stimuli was quantified using a polynomial-parametric modelling approach (i.e. the SOCoPaSul model) in place of the traditional approach whereby excursion size, peak alignment and pitch register are used independently of each other to quantify variation between pitch accents. Using this approach to calculate the acoustic-perceptual distance between different stimuli, PME was detected: (1) rated goodness decreased as acoustic-perceptual distance relative to the prototype increased, and (2) equally spaced items far from the prototype were less frequently generalised than equally spaced items in the neighbourhood of the prototype. These results support the concept of categorically distinct intonation events.

    Additional information

    Link to Speech Prosody Website
  • Rodenas-Cuadrado, P., Pietrafusa, N., Francavilla, T., La Neve, A., Striano, P., & Vernes, S. C. (2016). Characterisation of CASPR2 deficiency disorder - a syndrome involving autism, epilepsy and language impairment. BMC Medical Genetics, 17: 8. doi:10.1186/s12881-016-0272-8.

    Abstract

    Background

    Heterozygous mutations in CNTNAP2 have been identified in patients with a range of complex phenotypes including intellectual disability, autism and schizophrenia. However, heterozygous CNTNAP2 mutations are also found in the normal population. Conversely, homozygous mutations are rare in patient populations and have not been found in any unaffected individuals.
    Case presentation

    We describe a consanguineous family carrying a deletion in CNTNAP2 predicted to abolish function of its protein product, CASPR2. Homozygous family members display epilepsy, facial dysmorphisms, severe intellectual disability and impaired language. We compared these patients with previously reported individuals carrying homozygous mutations in CNTNAP2 and identified a highly recognisable phenotype.
    Conclusions

    We propose that CASPR2 loss produces a syndrome involving early-onset refractory epilepsy, intellectual disability, language impairment and autistic features that can be recognized as CASPR2 deficiency disorder. Further screening for homozygous patients meeting these criteria, together with detailed phenotypic and molecular investigations, will be crucial for understanding the contribution of CNTNAP2 to normal and disrupted development.
  • Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.

    Abstract

    There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, models of both reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
  • Roelofs, A. (2004). The seduced speaker: Modeling of cognitive control. In A. Belz, R. Evans, & P. Piwek (Eds.), Natural language generation (pp. 1-10). Berlin: Springer.

    Abstract

    Although humans are the ultimate “natural language generators”, the area of psycholinguistic modeling has been somewhat underrepresented in recent approaches to Natural Language Generation in computer science. To draw attention to the area and illustrate its potential relevance to Natural Language Generation, I provide an overview of recent work on psycholinguistic modeling of language production together with some key empirical findings, state-of-the-art experimental techniques, and their historical roots. The techniques include analyses of speech-error corpora, chronometric analyses, eyetracking, and neuroimaging.
    The overview is built around the issue of cognitive control in natural language generation, concentrating on the production of single words, which is an essential ingredient of the generation of larger utterances. Most of the work exploited the fact that human speakers are good but not perfect at resisting temptation, which has provided some critical clues about the nature of the underlying system.
  • Roelofs, A., Piai, V., Garrido Rodriguez, G., & Chwilla, D. J. (2016). Electrophysiology of Cross-Language Interference and Facilitation in Picture Naming. Cortex, 76, 1-16. doi:10.1016/j.cortex.2015.12.003.

    Abstract

    Disagreement exists about how bilingual speakers select words, in particular, whether words in another language compete, or competition is restricted to a target language, or no competition occurs. Evidence that competition occurs but is restricted to a target language comes from response time (RT) effects obtained when speakers name pictures in one language while trying to ignore distractor words in another language. Compared to unrelated distractor words, RT is longer when the picture name and distractor are semantically related, but RT is shorter when the distractor is the translation of the name of the picture in the other language. These effects suggest that distractor words from another language do not compete themselves but activate their counterparts in the target language, thereby yielding the semantic interference and translation facilitation effects. Here, we report an event-related brain potential (ERP) study testing the prediction that priming underlies both of these effects. The RTs showed semantic interference and translation facilitation effects. Moreover, the picture-word stimuli yielded an N400 response, whose amplitude was smaller on semantic and translation trials than on unrelated trials, providing evidence that interference and facilitation priming underlie the RT effects. We present the results of computer simulations showing the utility of a within-language competition account of our findings.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick (2000). Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.

    Abstract

    B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. The author contests this claim by showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that extant feedback accounts of the error biases and relevant chronometric effects are incompatible. WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric data, the error biases, and the influence of the impairment locus in aphasic speakers.
  • Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick (2004). Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.

    Abstract

    WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M. Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A., & Schiller, N. (2004). Produzieren von Ein- und Mehrwortäusserungen. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 655-658). Göttingen: Vandenhoeck & Ruprecht.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Rojas-Berscia, L. M. (2016). Lóxoro, traces of a contemporary Peruvian genderlect. Borealis: An International Journal of Hispanic Linguistics, 5, 157-170.

    Abstract

    Not long after the premiere of Loxoro in 2011, a short film by Claudia Llosa which presents the problems the transgender community faces in the capital of Peru, a new language variety became visible for the first time to Lima society. Lóxoro [ˈlok.so.ɾo] or Húngaro [ˈuŋ.ga.ɾo], as its speakers call it, is a language spoken by transsexuals and the gay community of Peru. The first clues about its existence were given by a comedian, Fernando Armas, in the mid-1990s; however, it is said to have appeared no earlier than the 1960s. Following previous work on gay languages by Baker (2002) and on language and society (cf. Halliday 1978), the main aim of the present article is to provide a first sketch of this language in its phonological, morphological, lexical and sociological aspects, based on a small corpus extracted from Llosa's film and natural dialogues from Peruvian TV journals, in order to classify this variety within modern sociolinguistic models (cf. Muysken 2010) and to argue for its "anti-language" (cf. Halliday 1978) character.
  • Romberg, A., Zhang, Y., Newman, B., Triesch, J., & Yu, C. (2016). Global and local statistical regularities control visual attention to object sequences. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 262-267).

    Abstract

    Many previous studies have shown that both infants and adults are skilled statistical learners. Because statistical learning is affected by attention, learners' ability to manage their attention can play a large role in what they learn. However, it is still unclear how learners allocate their attention in order to gain information in a visual environment containing multiple objects, especially how prior visual experience (i.e., familiarity of objects) influences where people look. To answer these questions, we collected eye movement data from adults exploring multiple novel objects while manipulating object familiarity with global (frequencies) and local (repetitions) regularities. We found that participants are sensitive to both global and local statistics embedded in their visual environment and they dynamically shift their attention to prioritize some objects over others as they gain knowledge of the objects and their distributions within the task.
  • Rossano, F., Brown, P., & Levinson, S. C. (2009). Gaze, questioning and culture. In J. Sidnell (Ed.), Conversation analysis: Comparative perspectives (pp. 187-249). Cambridge University Press.

    Abstract

    Relatively little work has examined the function of gaze in interaction. Previous research has mainly addressed issues such as next-speaker selection (e.g., Lerner 2003) or engagement and disengagement in the conversation (Goodwin 1981). It has looked at gaze behavior in relation to the roles participants are enacting locally (e.g., speaker or hearer) and in relation to the unit "turn" in the turn-taking system (Goodwin 1980, 1981; Kendon 1967). In his seminal work, Kendon (1967) claimed that "there is a very clear and quite consistent pattern, namely, that [the speaker] tends to look away as he begins a long utterance, and in many cases somewhat in advance of it; and that he looks up at his interlocutor as the end of the long utterance approaches, usually during the last phase, and he continues to look thereafter." Goodwin (1980), introducing the listener into the picture, proposed the following two rules: Rule 1: a speaker should obtain the gaze of his recipient during the course of a turn at talk; Rule 2: a recipient should be gazing at the speaker when the speaker is gazing at the hearer. Rossano's work (2005) has suggested the possibility of a different level of order for gaze in interaction: the sequential level. In particular, he found that gaze withdrawal after sustained mutual gaze tends to occur at sequence possible completion, and if both participants withdraw, the sequence is complete. By sequence we refer here to a unit structured around the notion of the adjacency pair, that is, two turns uttered by different speakers, ordered (first part and second part) and pair-type related (greeting-greeting, question-answer). The two turns are related by conditional relevance (Schegloff 1968): the first part requires the production of the second, and the absence of the latter is noticeable and accountable. Question-answer sequences are very typical examples of adjacency pairs. In this paper we compare the use of gaze in question-answer sequences in three different populations: Italians, speakers of Mayan Tzeltal (Mexico), and speakers of Yélî Dnye (Rossel Island, Papua New Guinea). Relying mainly on dyadic interactions and ordinary conversation, we provide a comparison of the occurrence of gaze in each turn (to compare with the claims of Goodwin and Kendon) and describe whether gaze has any effect on the other participant's response and whether it persists during the answer. The three languages and cultures compared here belong to three different continents and have previously been described as potentially following opposite rules: for speakers of Italian and Yélî Dnye, unproblematic and preferred engagement of mutual gaze; for speakers of Tzeltal, strong mutual gaze avoidance. This paper tries to provide an accurate description of their gaze behavior in this specific type of conversational sequence.
  • Rossano, F. (2004). Per una semiotica dell'interazione: Analisi del rapporto tra sguardo, corpo e parola in alcune interazione faccia a faccia. Master Thesis, Università di Bologna, Bologna, Italy.
  • Rossi, G. (2009). Il discorso scritto interattivo degli SMS: Uno studio pragmatico del "messaggiare". Rivista Italiana di Dialettologia, 33, 143-193. doi:10.1400/148734.
  • Rossi, G., & Zinken, J. (2016). Grammar and social agency: The pragmatics of impersonal deontic statements. Language, 92(4), e296-e325. doi:10.1353/lan.2016.0083.

    Abstract

    Sentence and construction types generally have more than one pragmatic function. Impersonal deontic declaratives such as 'it is necessary to X' assert the existence of an obligation or necessity without tying it to any particular individual. This family of statements can accomplish a range of functions, including getting another person to act, explaining or justifying the speaker's own behavior as he or she undertakes to do something, or even justifying the speaker's behavior while simultaneously getting another person to help. How is an impersonal deontic declarative fit for these different functions? And how do people know which function it has in a given context? We address these questions using video recordings of everyday interactions among speakers of Italian and Polish. Our analysis results in two findings. The first is that the pragmatics of impersonal deontic declaratives is systematically shaped by (i) the relative responsibility of participants for the necessary task and (ii) the speaker's nonverbal conduct at the time of the statement. These two factors influence whether the task in question will be dealt with by another person or by the speaker, often giving the statement the force of a request or, alternatively, of an account of the speaker's behavior. The second finding is that, although these factors systematically influence their function, impersonal deontic declaratives maintain the potential to generate more complex interactions that go beyond a simple opposition between requests and accounts, where participation in the necessary task may be shared, negotiated, or avoided. This versatility of impersonal deontic declaratives derives from their grammatical makeup: by being deontic and impersonal, they can mobilize or legitimize an act by different participants in the speech event, while their declarative form does not constrain how they should be responded to. These features make impersonal deontic declaratives a special tool for the management of social agency.
  • Rowbotham, S. J., Holler, J., Wearden, A., & Lloyd, D. M. (2016). I see how you feel: Recipients obtain additional information from speakers’ gestures about pain. Patient Education and Counseling, 99(8), 1333-1342. doi:10.1016/j.pec.2016.03.007.

    Abstract

    Objective

    Despite the need for effective pain communication, pain is difficult to verbalise. Co-speech gestures frequently add information about pain that is not contained in the accompanying speech. We explored whether recipients can obtain additional information from gestures about the pain that is being described.
    Methods

    Participants (n = 135) viewed clips of pain descriptions under one of four conditions: 1) Speech Only; 2) Speech and Gesture; 3) Speech, Gesture and Face; and 4) Speech, Gesture and Face plus Instruction (short presentation explaining the pain information that gestures can depict). Participants provided free-text descriptions of the pain that had been described. Responses were scored for the amount of information obtained from the original clips.
    Findings

    Participants in the Instruction condition obtained the most information, while those in the Speech Only condition obtained the least (all comparisons p<.001).
    Conclusions

    Gestures produced during pain descriptions provide additional information about pain that recipients are able to pick up without detriment to their uptake of spoken information.
    Practice implications

    Healthcare professionals may benefit from instruction in gestures to enhance uptake of information about patients’ pain experiences.
  • Rowland, C. F., & Theakston, A. L. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 2: The modals and auxiliary DO. Journal of Speech, Language, and Hearing Research, 52, 1471-1492. doi:10.1044/1092-4388(2009/08-0037a).

    Abstract

    Purpose: The study of auxiliary acquisition is central to work on language development and has attracted theoretical work from both nativist and constructivist approaches. This study is part of a 2-part companion set that represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. The aim of the research described in this part is to track the development of modal auxiliaries and auxiliary DO in questions and declaratives to provide a more complete picture of the development of the auxiliary system in English-speaking children. Method: Twelve English-speaking children participated in 2 tasks designed to elicit auxiliaries CAN, WILL, and DOES in declaratives and yes/no questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of the target auxiliaries differed in complex ways according to auxiliary, polarity, and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusions: These data cannot be explained in full by existing theories of auxiliary acquisition. Researchers working within both generativist and constructivist frameworks need to develop more detailed theories of acquisition that predict the pattern of acquisition observed.
  • Rubio-Fernández, P., Cummins, C., & Tian, Y. (2016). Are single and extended metaphors processed differently? A test of two Relevance-Theoretic accounts. Journal of Pragmatics, 94, 15-28. doi:10.1016/j.pragma.2016.01.005.

    Abstract

    Carston (2010) proposes that metaphors can be processed via two different routes. In line with the standard Relevance-Theoretic account of loose use, single metaphors are interpreted by a local pragmatic process of meaning adjustment, resulting in the construction of an ad hoc concept. In extended metaphorical passages, by contrast, the reader switches to a second processing mode because the various semantic associates in the passage are mutually reinforcing, which makes the literal meaning highly activated relative to possible meaning adjustments. In the second processing mode the literal meaning of the whole passage is metarepresented and entertained as an ‘imaginary world’ and the intended figurative implications are derived later in processing. The results of three experiments comparing the interpretation of the same target expressions across literal, single-metaphorical and extended-metaphorical contexts, using self-paced reading (Experiment 1), eye-tracking during natural reading (Experiment 2) and cued recall (Experiment 3), offered initial support for Carston's distinction between the processing of single and extended metaphors. We end with a comparison between extended metaphors and allegories, and make a call for further theoretical and experimental work to increase our understanding of the similarities and differences between the interpretation and processing of different figurative uses, single and extended.
  • Rubio-Fernández, P. (2016). How redundant are redundant color adjectives? An efficiency-based analysis of color overspecification. Frontiers in Psychology, 7: 153. doi:10.3389/fpsyg.2016.00153.

    Abstract

    Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives (RCAs) are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of RCAs: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce RCAs when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). RCAs are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation.
  • Rubio-Fernández, P., & Grassmann, S. (2016). Metaphors as second labels: Difficult for preschool children? Journal of Psycholinguistic Research, 45, 931-944. doi:10.1007/s10936-015-9386-y.

    Abstract

    This study investigates the development of two cognitive abilities that are involved in metaphor comprehension: implicit analogical reasoning and assigning an unconventional label to a familiar entity (as in Romeo’s ‘Juliet is the sun’). We presented 3- and 4-year-old children with literal object-requests in a pretense setting (e.g., ‘Give me the train with the hat’). Both age-groups succeeded in a baseline condition that used building blocks as props (e.g., placed either on the front or the rear of a train engine) and only required spatial analogical reasoning to interpret the referential expression. Both age-groups performed significantly worse in the critical condition, which used familiar objects as props (e.g., small dogs as pretend hats) and required both implicit analogical reasoning and assigning second labels. Only the 4-year olds succeeded in this condition. These results offer a new perspective on young children’s difficulties with metaphor comprehension in the preschool years.
  • Rubio-Fernández, P., & Geurts, B. (2016). Don’t mention the marble! The role of attentional processes in false-belief tasks. Review of Philosophy and Psychology, 7, 835-850. doi:10.1007/s13164-015-0290-z.
  • De Ruiter, J. P. (2004). On the primacy of language in multimodal communication. In Workshop Proceedings on Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces (LREC 2004) (pp. 38-41). Paris: ELRA - European Language Resources Association (CD-ROM).

    Abstract

    In this paper, I will argue that although the study of multimodal interaction offers exciting new prospects for Human Computer Interaction and human-human communication research, language is the primary form of communication, even in multimodal systems. I will support this claim with theoretical and empirical arguments, mainly drawn from human-human communication research, and will discuss the implications for multimodal communication research and Human-Computer Interaction.
  • De Ruiter, J. P. (1998). Gesture and speech production. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057686.
  • De Ruiter, J. P. (2004). Response systems and signals of recipiency. In A. Majid (Ed.), Field Manual Volume 9 (pp. 53-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506961.

    Abstract

    Listeners’ signals of recipiency, such as “Mm-hm” or “uh-huh” in English, are the most elementary or minimal “conversational turns” possible. Minimal, because apart from acknowledging recipiency and inviting the speaker to continue with his/her next turn, they do not add any new information to the discourse of the conversation. The goal of this project is to gather cross-cultural information on listeners’ feedback behaviour during conversation. Listeners in a conversation usually provide short signals that indicate to the speaker that they are still “with the speaker”. These signals could be verbal (like for instance “mm hm” in English or “hm hm” in Dutch) or nonverbal (visual), like nodding. Often, these signals are produced in overlap with the speaker’s vocalisation. If listeners do not produce these signals, speakers often invite them explicitly (e.g. “are you still there?” in a telephone conversation). Our goal is to investigate what kind of signals are used by listeners of different languages to signal “recipiency” to the speaker.
  • De Ruiter, L. E. (2009). The prosodic marking of topical referents in the German "Vorfeld" by children and adults. The Linguistic Review, 26, 329-354. doi:10.1515/tlir.2009.012.

    Abstract

    This article reports on the analysis of prosodic marking of topical referents in the German prefield by 5- and 7-year-old children and adults. Natural speech data was obtained from a picture-elicited narration task. The data was analyzed both phonologically and phonetically. In line with previous findings, adult speakers realized topical referents predominantly with the accents L+H* and L*+H, but H* accents and unaccented items were also observed. Children used the same accent types as adults, but the accent types were distributed differently. Also, children aligned pitch minima earlier than adults and produced accents with a decreased speed of pitch change. Possible reasons for these findings are discussed. Contrast – defined in terms of a change of subjecthood – did not affect the choice of pitch accent type and did not influence phonetic realization, underlining the fact that accentuation is often a matter of individual speaker choice.

  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual-mechanism models assume separate modules, with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 x 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high-frequency verbs irrespective of regularity. The PET data showed activations in the inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs, but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • Salomo, D., & Liszkowski, U. (2009). Socialisation of prelinguistic communication. In A. Majid (Ed.), Field manual volume 12 (pp. 56-57). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.844597.

    Abstract

    Little is known about cultural differences in interactional practices with infants. The goal of this task is to document the nature and emergence of caregiver-infant interaction/communication in different cultures. There are two tasks: Task 1 – a brief documentation about the culture under investigation with respect to infant-caregiver interaction and parental beliefs. Task 2 – the “decorated room”, a task designed to elicit infant and caregiver interaction.
  • San Roque, L. (2016). 'Where' questions and their responses in Duna (Papua New Guinea). Open Linguistics, 2(1), 85-104. doi:10.1515/opli-2016-0005.

    Abstract

    Despite their central role in question formation, content interrogatives in spontaneous conversation remain relatively under-explored cross-linguistically. This paper outlines the structure of ‘where’ expressions in Duna, a language spoken in Papua New Guinea, and examines where-questions in a small Duna data set in terms of their frequency, function, and the responses they elicit. Questions that ask ‘where?’ have been identified as a useful tool in studying the language of space and place, and, in the Duna case and elsewhere, show high frequency and functional flexibility. Although where-questions formulate place as an information gap, they are not always answered through direct reference to canonical places. While some question types may be especially “socially costly” (Levinson 2012), asking ‘where’ perhaps provides a relatively innocuous way of bringing a particular event or situation into focus.
  • Sánchez-Fernández, M., & Rojas-Berscia, L. M. (2016). Vitalidad lingüística de la lengua paipai de Santa Catarina, Baja California. LIAMES, 16(1), 157-183. doi:10.20396/liames.v16i1.8646171.

    Abstract

    In the last few decades, little to nothing has been said about the sociolinguistic situation of Yuman languages in Mexico. To address this lack of studies, we present a first study on the linguistic vitality of Paipai as it is spoken in Santa Catarina, Baja California, Mexico. Since languages such as Mexican Spanish and Ko’ahl coexist with this language in the same ecology, both are part of the study as well. This first approach rests on two axes: on the one hand, it provides a theoretical framework that explains the sociolinguistic dynamics in the ecology of the language (Mufwene 2001); on the other hand, it presents a quantitative study based on MSF (Maximum Shared Facility) (Terborg & García 2011), which explains the state of linguistic vitality of Paipai, enriched by qualitative information collected in situ.
  • Sankoff, G., & Brown, P. (2009). The origins of syntax in discourse: A case study of Tok Pisin relatives [reprint of 1976 article in Language]. In J. Holm, & S. Michaelis (Eds.), Contact languages (vol. II) (pp. 433-476). London: Routledge.
  • Sassenhagen, J., & Alday, P. M. (2016). A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests. Brain and Language, 162, 42-45. doi:10.1016/j.bandl.2016.08.001.

    Abstract

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance not to refer to inference about a particular population parameter, but about 1. the sample in question, 2. the practical relevance of a sample difference (so that a nonsignificant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.
  • Sauppe, S. (2016). Verbal semantics drives early anticipatory eye movements during the comprehension of verb-initial sentences. Frontiers in Psychology, 7: 95. doi:10.3389/fpsyg.2016.00095.

    Abstract

    Studies on anticipatory processes during sentence comprehension often focus on the prediction of postverbal direct objects. In subject-initial languages (the target of most studies so far), however, the position in the sentence, the syntactic function, and the semantic role of arguments are often conflated. For example, in the sentence “The frog will eat the fly” the syntactic object (“fly”) is at the same time also the last word and the patient argument of the verb. It is therefore not apparent which kind of information listeners orient to for predictive processing during sentence comprehension. A visual world eye-tracking study on the verb-initial language Tagalog (Austronesian) tested what kind of information listeners use to anticipate upcoming postverbal linguistic input. The grammatical structure of Tagalog makes it possible to test whether listeners' anticipatory gaze behavior is guided by predictions of the linear order of words, by syntactic functions (e.g., subject/object), or by semantic roles (agent/patient). Participants heard sentences of the type “Eat frog fly” or “Eat fly frog” (both meaning “The frog will eat the fly”) while looking at displays containing an agent referent (“frog”), a patient referent (“fly”) and a distractor. The verb carried morphological marking that allowed the order and syntactic function of agent and patient to be inferred. After having heard the verb, listeners fixated on the agent irrespective of its syntactic function or position in the sentence. While hearing the first-mentioned argument, listeners fixated on the corresponding referent in the display accordingly and then initiated saccades to the last-mentioned referent before it was encountered. The results indicate that listeners used verbal semantics to identify referents and their semantic roles early; information about word order or syntactic functions did not influence anticipatory gaze behavior directly after the verb was heard. In this verb-initial language, event semantics takes early precedence during the comprehension of sentences, while arguments are anticipated temporally more local to when they are encountered. The current experiment thus helps to better understand anticipation during language processing by employing linguistic structures not available in previously studied subject-initial languages.
  • Sauter, D., Scott, S., & Calder, A. (2004). Categorisation of vocally expressed positive emotion: A first step towards basic positive emotions? [Abstract]. Proceedings of the British Psychological Society, 12, 111.

    Abstract

    Most of the study of basic emotion expressions has focused on facial expressions and little work has been done to specifically investigate happiness, the only positive one of the basic emotions (Ekman & Friesen, 1971). However, a theoretical suggestion has been made that happiness could be broken down into discrete positive emotions, which each fulfil the criteria of basic emotions, and that these would be expressed vocally (Ekman, 1992). To empirically test this hypothesis, 20 participants categorised 80 paralinguistic sounds using the labels achievement, amusement, contentment, pleasure and relief. The results suggest that achievement, amusement and relief are perceived as distinct categories, which subjects accurately identify. In contrast, the categories of contentment and pleasure were systematically confused with other responses, although performance was still well above chance levels. These findings are initial evidence that the positive emotions engage distinct vocal expressions and may be considered to be distinct emotion categories.
  • Sauter, D. (2009). Emotion concepts. In A. Majid (Ed.), Field manual volume 12 (pp. 20-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883578.

    Abstract

    The goal of this task is to investigate emotional categories across linguistic and cultural boundaries. There are three core tasks. In order to conduct this task you will need emotional vocalisation stimuli on your computer and you must translate the scenarios at the end of this entry into your local language.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2009). Universal vocal signals of emotion. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (CogSci 2009) (pp. 2251-2255). Cognitive Science Society.

    Abstract

    Emotional signals allow for the sharing of important information with conspecifics, for example to warn them of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. Although much is known about facial expressions of emotion, less research has focused on affect in the voice. We compare British listeners to individuals from remote Namibian villages who have had no exposure to Western culture, and examine recognition of non-verbal emotional vocalizations, such as screams and laughs. We show that a number of emotions can be universally recognized from non-verbal vocal signals. In addition we demonstrate the specificity of this pattern, with a set of additional emotions only recognized within, but not across these cultural groups. Our findings indicate that a small set of primarily negative emotions have evolved signals across several modalities, while most positive emotions are communicated with culture-specific signals.
  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Schapper, A., San Roque, L., & Hendery, R. (2016). Tree, firewood and fire in the languages of Sahul. In P. Juvonen (Ed.), The Lexical Typology of Semantic Shifts (pp. 355-422). Berlin: de Gruyter Mouton.
  • Scharenborg, O., Boves, L., & Ten Bosch, L. (2004). ‘On-line early recognition’ of polysyllabic words in continuous speech. In S. Cassidy, F. Cox, R. Mannell, & P. Sallyanne (Eds.), Proceedings of the Tenth Australian International Conference on Speech Science & Technology (pp. 387-392). Canberra: Australian Speech Science and Technology Association Inc.

    Abstract

    In this paper, we investigate the ability of SpeM, our recognition system based on the combination of an automatic phone recogniser and a wordsearch module, to determine as early as possible during the word recognition process whether a word is likely to be recognised correctly (this we refer to as ‘on-line’ early word recognition). We present two measures that can be used to predict whether a word is correctly recognised: the Bayesian word activation and the amount of available (acoustic) information for a word. SpeM was tested on 1,463 polysyllabic words in 885 continuous speech utterances. The investigated predictors indicated that a word activation that is 1) high (but not too high) and 2) based on more phones is more reliable to predict the correctness of a word than a similarly high value based on a small number of phones or a lower value of the word activation.
  • Scharenborg, O., & Okolowski, S. (2009). Lexical embedding in spoken Dutch. In INTERSPEECH 2009 - 10th Annual Conference of the International Speech Communication Association (pp. 1879-1882). ISCA Archive.

    Abstract

    A stretch of speech is often consistent with multiple words, e.g., the sequence /hæm/ is consistent with ‘ham’ but also with the first syllable of ‘hamster’, resulting in temporary ambiguity. However, to what degree does this lexical embedding occur? Analyses on two corpora of spoken Dutch showed that 11.9%-19.5% of polysyllabic word tokens have word-initial embedding, while 4.1%-7.5% of monosyllabic word tokens can appear word-initially embedded. This is much lower than suggested by an analysis of a large dictionary of Dutch. Speech processing thus appears to be simpler than one might expect on the basis of statistics on a dictionary.
  • Scharenborg, O. (2009). Using durational cues in a computational model of spoken-word recognition. In INTERSPEECH 2009 - 10th Annual Conference of the International Speech Communication Association (pp. 1675-1678). ISCA Archive.

    Abstract

    Evidence that listeners use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past few years. In this paper, we investigate whether durational cues are also beneficial for word recognition in a computational model of spoken-word recognition. Two sets of simulations were carried out using the acoustic signal as input. The simulations showed that the computational model, like humans, takes benefit from durational cues during word recognition, and uses these to disambiguate the speech signal. These results thus provide support for the theory that durational cues play a role in spoken-word recognition.
  • Scheeringa, R., Petersson, K. M., Oostenveld, R., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2009). Trial-by-trial coupling between EEG and BOLD identifies networks related to alpha and theta EEG power increases during working memory maintenance. Neuroimage, 44, 1224-1238. doi:10.1016/j.neuroimage.2008.08.041.

    Abstract

    PET and fMRI experiments have previously shown that several brain regions in the frontal and parietal lobe are involved in working memory maintenance. MEG and EEG experiments have shown parametric increases with load for oscillatory activity in posterior alpha and frontal theta power. In the current study we investigated whether the areas found with fMRI can be associated with these alpha and theta effects by measuring simultaneous EEG and fMRI during a modified Sternberg task. This allowed us to correlate EEG at the single-trial level with the fMRI BOLD signal by forming a regressor based on single-trial alpha and theta power estimates. We observed a right posterior, parametric alpha power increase, which was functionally related to decreases in BOLD in the primary visual cortex and in the posterior part of the right middle temporal gyrus. We relate this finding to the inhibition of neuronal activity that may interfere with WM maintenance. An observed parametric increase in frontal theta power was correlated to a decrease in BOLD in regions that together form the default mode network. We did not observe correlations between oscillatory EEG phenomena and BOLD in the traditional WM areas. In conclusion, the study shows that simultaneous EEG-fMRI recordings can be successfully used to identify the emergence of functional networks in the brain during the execution of a cognitive task.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2016). L1 and L2 Distance Effects in Learning L3 Dutch. Language Learning, 66, 224-256. doi:10.1111/lang.12150.

    Abstract

    Many people speak more than two languages. How do languages acquired earlier affect the learnability of additional languages? We show that linguistic distances between speakers' first (L1) and second (L2) languages and their third (L3) language play a role. Larger distances from the L1 to the L3 and from the L2 to the L3 correlate with lower degrees of L3 learnability. The evidence comes from L3 Dutch speaking proficiency test scores obtained by candidates who speak a diverse set of L1s and L2s. Lexical and morphological distances between the L1s of the learners and Dutch explained 47.7% of the variation in proficiency scores. Lexical and morphological distances between the L2s of the learners and Dutch explained 32.4% of the variation in proficiency scores in multilingual learners. Cross-linguistic differences require language learners to bridge varying linguistic gaps between their L1 and L2 competences and the target language.
  • Schiller, N. O., Fikkert, P., & Levelt, C. C. (2004). Stress priming in picture naming: An SOA study. Brain and Language, 90(1-3), 231-240. doi:10.1016/S0093-934X(03)00436-X.

    Abstract

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel–MOtor vs. koSTUUM–MOtor; capital letters indicate stressed syllables in prime–target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. The reason for this effect was neither the quality of the pictures nor frequency of occurrence or voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.
  • Schiller, N. O., & De Ruiter, J. P. (2004). Some notes on priming, alignment, and self-monitoring [Commentary]. Behavioral and Brain Sciences, 27(2), 208-209. doi:10.1017/S0140525X0441005X.

    Abstract

    Any complete theory of speaking must take the dialogical function of language use into account. Pickering & Garrod (P&G) make some progress on this point. However, we question whether their interactive alignment model is the optimal approach. In this commentary, we specifically criticize (1) their notion of alignment being implemented through priming, and (2) their claim that self-monitoring can occur at all levels of linguistic representation.
  • Schiller, N. O. (2004). The onset effect in word naming. Journal of Memory and Language, 50(4), 477-490. doi:10.1016/j.jml.2004.02.004.

    Abstract

    This study investigates whether or not masked form priming effects in the naming task depend on the number of shared segments between prime and target. Dutch participants named bisyllabic words, which were preceded by visual masked primes. When primes shared the initial segment(s) with the target, naming latencies were shorter than in a control condition (string of percent signs). Onset complexity (singleton vs. complex word onset) did not modulate this priming effect in Dutch. Furthermore, significant priming due to shared final segments was only found when the prime did not contain a mismatching onset, suggesting an interfering role of initial non-target segments. It is concluded that (a) degree of overlap (segmental match vs. mismatch), and (b) position of overlap (initial vs. final) influence the magnitude of the form priming effect in the naming task. A modification of the segmental overlap hypothesis (Schiller, 1998) is proposed to account for the data.
  • Schiller, N., Horemans, I., Ganushchak, L. Y., & Koester, D. (2009). Event-related brain potentials during monitoring of speech errors. NeuroImage, 44, 520-530. doi:10.1016/j.neuroimage.2008.09.019.

    Abstract

    When we perceive speech, our goal is to extract the meaning of the verbal message which includes semantic processing. However, how deeply do we process speech in different situations? In two experiments, native Dutch participants heard spoken sentences describing simultaneously presented pictures. Sentences either correctly described the pictures or contained an anomalous final word (i.e. a semantically or phonologically incongruent word). In the first experiment, spoken sentences were task-irrelevant and both anomalous conditions elicited similar centro-parietal N400s that were larger in amplitude than the N400 for the correct condition. In the second experiment, we ensured that participants processed the same stimuli semantically. In an early time window, we found similar phonological mismatch negativities for both anomalous conditions compared to the correct condition. These negativities were followed by an N400 that was larger for semantic than phonological errors. Together, these data suggest that we process speech semantically, even if the speech is task-irrelevant. Once listeners allocate more cognitive resources to the processing of speech, we suggest that they make predictions for upcoming words, presumably by means of the production system and an internal monitoring loop, to facilitate lexical processing of the perceived speech.
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Schimke, S. (2009). The acquisition of finiteness by Turkish learners of German and Turkish learners of French: Investigating knowledge of forms and functions in production and comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Sarah Schimke investigated how people who move to another country as adults learn the language of that country, even without receiving much language instruction. Two groups were studied: Turkish immigrants in France and Turkish immigrants in Germany. The results show that, at the beginning of the acquisition process, adult learners create a simpler variety of the target language. Words of the target language are acquired and used, but a simplified grammar is applied. In particular, learners at this stage do not use finiteness, i.e. no morphological variation of verbs. Schimke shows that once finiteness is acquired, it strongly changes the learners' grammar, which then begins to resemble the target-language grammar much more closely. She also shows that this process is influenced by characteristics of the target language, such as word order and the complexity of the morphology.

    Additional information

    full text via Radboud Repository
  • Schimke, S. (2009). Does finiteness mark assertion? A picture selection study with Turkish learners and native speakers of German. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 169-202). Berlin: Mouton de Gruyter.
  • Schmid, M. S., Berends, S. M., Bergmann, C., Brouwer, S., Meulman, N., Seton, B., Sprenger, S., & Stowe, L. A. (2016). Designing research on bilingual development: Behavioral and neurolinguistic experiments. Berlin: Springer.
  • Schmidt, J., Herzog, D., Scharenborg, O., & Janse, E. (2016). Do hearing aids improve affect perception? Advances in Experimental Medicine and Biology, 894, 47-55. doi:10.1007/978-3-319-25474-6_6.

    Abstract

    Normal-hearing listeners use acoustic cues in speech to interpret a speaker's emotional state. This study investigates the effect of hearing aids on the perception of the emotion dimensions arousal (aroused/calm) and valence (positive/negative attitude) in older adults with hearing loss. More specifically, we investigate whether wearing a hearing aid improves the correlation between affect ratings and affect-related acoustic parameters. To that end, affect ratings by 23 hearing-aid users were compared for aided and unaided listening. Moreover, these ratings were compared to the ratings by an age-matched group of 22 participants with age-normal hearing. For arousal, hearing-aid users rated utterances as generally more aroused in the aided than in the unaided condition. Intensity differences were the strongest indicator of degree of arousal. Among the hearing-aid users, those with poorer hearing used additional prosodic cues (i.e., tempo and pitch) for their arousal ratings, compared to those with relatively good hearing. For valence, pitch was the only acoustic cue that was associated with valence. Neither listening condition nor hearing loss severity (differences among the hearing-aid users) influenced affect ratings or the use of affect-related acoustic parameters. Compared to the normal-hearing reference group, ratings of hearing-aid users in the aided condition did not generally differ in either emotion dimension. However, hearing-aid users were more sensitive to intensity differences in their arousal ratings than the normal-hearing participants. We conclude that the use of hearing aids is important for the rehabilitation of affect perception and particularly influences the interpretation of arousal.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2016). Perception of emotion in conversational speech by younger and older listeners. Frontiers in Psychology, 7: 781. doi:10.3389/fpsyg.2016.00781.

    Abstract

    This study investigated whether age and/or differences in hearing sensitivity influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech. To that end, this study specifically focused on the relationship between participants' ratings of short affective utterances and the utterances' acoustic parameters (pitch, intensity, and articulation rate) known to be associated with the emotion dimensions arousal and valence. Stimuli consisted of short utterances taken from a corpus of conversational speech. In two rating tasks, younger and older adults either rated arousal or valence using a 5-point scale. Mean intensity was found to be the main cue participants used in the arousal task (i.e., higher mean intensity cueing higher levels of arousal) while mean F0 was the main cue in the valence task (i.e., higher mean F0 being interpreted as more negative). Even though there were no overall age group differences in arousal or valence ratings, compared to younger adults, older adults responded less strongly to mean intensity differences cueing arousal and responded more strongly to differences in mean F0 cueing valence. Individual hearing sensitivity among the older adults did not modify the use of mean intensity as an arousal cue. However, individual hearing sensitivity generally affected valence ratings and modified the use of mean F0. We conclude that age differences in the interpretation of mean F0 as a cue for valence are likely due to age-related hearing loss, whereas age differences in rating arousal do not seem to be driven by hearing sensitivity differences between age groups (as measured by pure-tone audiometry).
  • Schmiedtová, B. (2004). At the same time... The expression of simultaneity in learner varieties. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59569.
  • Schmiedtová, B. (2004). At the same time... The expression of simultaneity in learner varieties. Berlin: Mouton de Gruyter.

    Abstract

    The study undertakes a detailed and systematic classification of linguistic simultaneity expressions. Further, it aims at a well-described survey of how simultaneity is expressed by native speakers in their own language. On the basis of real production data, the book answers the questions of how native speakers express temporal simultaneity in general, and how learners at different levels of proficiency deal with this situation under experimental test conditions. Furthermore, the results of this study shed new light on our understanding of aspect in general, and on its acquisition by adult learners.
  • Schmitt, B. M., Schiller, N. O., Rodriguez-Fornells, A., & Münte, T. F. (2004). Elektrophysiologische Studien zum Zeitverlauf von Sprachprozessen. In H. H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 51-70). Tübingen: Stauffenburg.
  • Schoffelen, J.-M., & Gross, J. (2009). Source connectivity analysis with MEG and EEG. Human Brain Mapping, 30, 1857-1865. doi:10.1002/hbm.20745.

    Abstract

    Interactions between functionally specialized brain regions are crucial for normal brain function. Magnetoencephalography (MEG) and electroencephalography (EEG) are techniques suited to capture these interactions, because they provide whole head measurements of brain activity in the millisecond range. More than one sensor picks up the activity of an underlying source. This field spread severely limits the utility of connectivity measures computed directly between sensor recordings. Consequentially, neuronal interactions should be studied on the level of the reconstructed sources. This article reviews several methods that have been applied to investigate interactions between brain regions in source space. We will mainly focus on the different measures used to quantify connectivity, and on the different strategies adopted to identify regions of interest. Despite various successful accounts of MEG and EEG source connectivity, caution with respect to the interpretation of the results is still warranted. This is due to the fact that effects of field spread can never be completely abolished in source space. However, in this very exciting and developing field of research this cautionary note should not discourage researchers from further investigation into the connectivity between neuronal sources.
  • Schoot, L., Heyselaar, E., Hagoort, P., & Segaert, K. (2016). Does syntactic alignment effectively influence how speakers are perceived by their conversation partner? PLoS One, 11(4): e0153521. doi:10.1371/journal.pone.0153521.

    Abstract

    The way we talk can influence how we are perceived by others. Whereas previous studies have started to explore the influence of social goals on syntactic alignment, in the current study, we additionally investigated whether syntactic alignment effectively influences conversation partners’ perception of the speaker. To this end, we developed a novel paradigm in which we can measure the effect of social goals on the strength of syntactic alignment for one participant (primed participant), while simultaneously obtaining usable social opinions about them from their conversation partner (the evaluator). In Study 1, participants’ desire to be rated favorably by their partner was manipulated by assigning pairs to a Control (i.e., primed participants did not know they were being evaluated) or Evaluation context (i.e., primed participants knew they were being evaluated). Surprisingly, results showed no significant difference in the strength with which primed participants aligned their syntactic choices with their partners’ choices. In a follow-up study, we used a Directed Evaluation context (i.e., primed participants knew they were being evaluated and were explicitly instructed to make a positive impression). However, again, there was no evidence supporting the hypothesis that participants’ desire to impress their partner influences syntactic alignment. With respect to the influence of syntactic alignment on perceived likeability by the evaluator, a negative relationship was reported in Study 1: the more primed participants aligned their syntactic choices with their partner, the more that partner decreased their likeability rating after the experiment. However, this effect was not replicated in the Directed Evaluation context of Study 2. In other words, our results do not support the conclusion that speakers’ desire to be liked affects how much they align their syntactic choices with their partner, nor is there convincing evidence that there is a reliable relationship between syntactic alignment and perceived likeability.

    Additional information

    Data availability
  • Schoot, L., Hagoort, P., & Segaert, K. (2016). What can we learn from a two-brain approach to verbal interaction? Neuroscience and Biobehavioral Reviews, 68, 454-459. doi:10.1016/j.neubiorev.2016.06.009.

    Abstract

    Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding.
  • Schuppler, B., Van Dommelen, W., Koreman, J., & Ernestus, M. (2009). Word-final [t]-deletion: An analysis on the segmental and sub-segmental level. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2275-2278). Causal Productions Pty Ltd.

    Abstract

    This paper presents a study on the reduction of word-final [t]s in conversational standard Dutch. Based on a large number of tokens annotated on the segmental level, we show that bigram frequency and segmental context are the main predictors for the absence of [t]s. In a second study, we present an analysis of the detailed acoustic properties of word-final [t]s and we show that bigram frequency and context also play a role on the sub-segmental level. This paper extends research on the realization of /t/ in spontaneous speech and shows the importance of incorporating sub-segmental properties in models of speech.
  • Schuppler, B., van Doremalen, J., Scharenborg, O., Cranen, B., & Boves, L. (2009). Using temporal information for improving articulatory-acoustic feature classification. Automatic Speech Recognition and Understanding, IEEE 2009 Workshop, 70-75. doi:10.1109/ASRU.2009.5373314.

    Abstract

    This paper combines acoustic features with a high temporal and a high frequency resolution to reliably classify articulatory events of short duration, such as bursts in plosives. SVM classification experiments on TIMIT and SVArticulatory showed that articulatory-acoustic features (AFs) based on a combination of MFCCs derived from a long window of 25ms and a short window of 5ms that are both shifted with 2.5ms steps (Both) outperform standard MFCCs derived with a window of 25 ms and a shift of 10 ms (Baseline). Finally, comparison of the TIMIT and SVArticulatory results showed that for classifiers trained on data that allows for asynchronously changing AFs (SVArticulatory) the improvement from Baseline to Both is larger than for classifiers trained on data where AFs change simultaneously with the phone boundaries (TIMIT).
  • Schwichtenberg, B., & Schiller, N. O. (2004). Semantic gender assignment regularities in German. Brain and Language, 90(1-3), 326-337. doi:10.1016/S0093-934X(03)00445-0.

    Abstract

    Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution identified within the vocabulary of a language, are still controversial. The present study is an empirical contribution trying to shed light on the gender assignment system native German speakers have at their disposal. Participants presented with a category (e.g., predator) and a pair of gender-marked pseudowords (e.g., der Trelle vs. die Stisse) preferentially selected the pseudoword preceded by the gender-marked determiner “associated” with the category (e.g., masculine). This finding suggests that semantic regularities might be part of the gender assignment system of native speakers.
  • Scott, S. K., Sauter, D., & McGettigan, C. (2009). Brain mechanisms for processing perceived emotional vocalizations in humans. In S. M. Brudzynski (Ed.), Handbook of mammalian vocalization: An integrative neuroscience approach (pp. 187-198). London: Academic Press.

    Abstract

    Humans express emotional information in their facial expressions and body movements, as well as in their voice. In this chapter we consider the neural processing of a specific kind of vocal expressions, non-verbal emotional vocalizations e.g. laughs and sobs. We outline evidence, from patient studies and functional imaging studies, for both emotion specific and more general processing of emotional information in the voice. We relate these findings to evidence for both basic and dimensional accounts of the representations of emotion. We describe in detail an fMRI study of positive and negative non-verbal expressions of emotion, which revealed that prefrontal areas involved in the control of oro-facial movements were also sensitive to different kinds of vocal emotional information.
  • Scott, S. K., McGettigan, C., & Eisner, F. (2009). A little more conversation, a little less action: Candidate roles for motor cortex in speech perception. Nature Reviews Neuroscience, 10(4), 295-302. doi:10.1038/nrn2603.

    Abstract

    The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor-cortex activation is essential in joint speech, particularly for the timing of turn-taking.
  • Scott, L. J., Muglia, P., Kong, X. Q., Guan, W., Flickinger, M., Upmanyu, R., Tozzi, F., Li, J. Z., Burmeister, M., Absher, D., Thompson, R. C., Francks, C., Meng, F., Antoniades, A., Southwick, A. M., Schatzberg, A. F., Bunney, W. E., Barchas, J. D., Jones, E. G., Day, R., Matthews, K., McGuffin, P., Strauss, J. S., Kennedy, J. L., Middleton, L., Roses, A. D., Watson, S. J., Vincent, J. B., Myers, R. M., Farmer, A. E., Akil, H., Burns, D. K., & Boehnke, M. (2009). Genome-wide association and meta-analysis of bipolar disorder in individuals of European ancestry. Proceedings of the National Academy of Sciences of the United States of America, 106(18), 7501-7506. doi:10.1073/pnas.0813386106.

    Abstract

    Bipolar disorder (BP) is a disabling and often life-threatening disorder that affects approximately 1% of the population worldwide. To identify genetic variants that increase the risk of BP, we genotyped on the Illumina HumanHap550 Beadchip 2,076 bipolar cases and 1,676 controls of European ancestry from the National Institute of Mental Health Human Genetics Initiative Repository, and the Prechter Repository and samples collected in London, Toronto, and Dundee. We imputed SNP genotypes and tested for SNP-BP association in each sample and then performed meta-analysis across samples. The strongest association P value for this 2-study meta-analysis was 2.4 x 10(-6). We next imputed SNP genotypes and tested for SNP-BP association based on the publicly available Affymetrix 500K genotype data from the Wellcome Trust Case Control Consortium for 1,868 BP cases and a reference set of 12,831 individuals. A 3-study meta-analysis of 3,683 nonoverlapping cases and 14,507 extended controls on >2.3 M genotyped and imputed SNPs resulted in 3 chromosomal regions with association P approximately 10(-7): 1p31.1 (no known genes), 3p21 (>25 known genes), and 5q15 (MCTP1). The most strongly associated nonsynonymous SNP rs1042779 (OR = 1.19, P = 1.8 x 10(-7)) is in the ITIH1 gene on chromosome 3, with other strongly associated nonsynonymous SNPs in GNL3, NEK4, and ITIH3. Thus, these chromosomal regions harbor genes implicated in cell cycle, neurogenesis, neuroplasticity, and neurosignaling. In addition, we replicated the reported ANK3 association results for SNP rs10994336 in the nonoverlapping GSK sample (OR = 1.37, P = 0.042). Although these results are promising, analysis of additional samples will be required to confirm that variant(s) in these regions influence BP risk.

    Additional information

    Supp_Inform_Scott_et_al.pdf
  • Scott, S., & Sauter, D. (2004). Vocal expressions of emotion and positive and negative basic emotions [Abstract]. Proceedings of the British Psychological Society, 12, 156.

    Abstract

    Previous studies have indicated that vocal and facial expressions of the ‘basic’ emotions share aspects of processing. Thus amygdala damage compromises the perception of fear and anger from the face and from the voice. In the current study we tested the hypothesis that there exist positive basic emotions, expressed mainly in the voice (Ekman, 1992). Vocal stimuli were produced to express the specific positive emotions of amusement, achievement, pleasure, contentment and relief.
  • Segaert, K., Nygård, G. E., & Wagemans, J. (2009). Identification of everyday objects on the basis of kinetic contours. Vision Research, 49(4), 417-428. doi:10.1016/j.visres.2008.11.012.

    Abstract

    Using kinetic contours derived from everyday objects, we investigated how motion affects object identification. In order not to be distinguishable when static, kinetic contours were made from random dot displays consisting of two regions, inside and outside the object contour. In Experiment 1, the dots were moving in only one of two regions. The objects were identified nearly equally well as soon as the dots either in the figure or in the background started to move. RTs decreased with increasing motion coherence levels and were shorter for complex, less compact objects than for simple, more compact objects. In Experiment 2, objects could be identified when the dots were moving both in the figure and in the background with speed and direction differences between the two. A linear increase in either the speed difference or the direction difference caused a linear decrease in RT for correct identification. In addition, the combination of speed and motion differences appeared to be super-additive.
  • Segaert, K., Wheeldon, L., & Hagoort, P. (2016). Unifying structural priming effects on syntactic choices and timing of sentence generation. Journal of Memory and Language, 91, 59-80. doi:10.1016/j.jml.2016.03.011.

    Abstract

    We investigated whether structural priming of production latencies is sensitive to the same factors known to influence persistence of structural choices: structure preference, cumulativity, and verb repetition. In two experiments, we found structural persistence only for passives (inverse preference effect), while priming effects on latencies were stronger for actives (positive preference effect). We found structural persistence for passives to be influenced by immediate primes and long-lasting cumulativity (all preceding primes) (Experiment 1), and to be boosted by verb repetition (Experiment 2). In latencies, we found that effects for actives were sensitive to long-lasting cumulativity (Experiment 1). In Experiment 2, we found priming in latencies for actives overall, while for passives the priming effects emerged as cumulative exposure increased, but only when also aided by verb repetition. These findings are consistent with the Two-stage Competition model, an integrated model of structural priming effects for sentence choice and latency.
  • Seidl, A., Cristia, A., Bernard, A., & Onishi, K. H. (2009). Allophonic and phonemic contrasts in infants' learning of sound patterns. Language Learning and Development, 5, 191-202. doi:10.1080/15475440902754326.

    Abstract

    French-learning 11-month-old and English-learning 11- and 4-month-old infants were familiarized with consonant–vowel–consonant syllables in which the final consonants were dependent on whether the preceding vowel was oral or nasal. Oral and nasal vowels are present in the ambient language of all participants, but vowel nasality is phonemic (contrastive) in French and allophonic (noncontrastive) in English. After familiarization, infants heard novel syllables that either followed or violated the familiarized patterns. French-learning 11-month-olds and English-learning 4-month-olds displayed a reliable pattern of preference demonstrating learning and generalization of the patterns, while English-learning 11-month-olds oriented equally to syllables following and violating the familiarized patterns. The results are consistent with an experience-driven reduction of attention to allophonic contrasts by as early as 11 months, which influences phonotactic learning.
  • Sekine, K. (2009). Changes in frame of reference use across the preschool years: A longitudinal study of the gestures and speech produced during route descriptions. Language and Cognitive Processes, 24(2), 218-238. doi:10.1080/01690960801941327.

    Abstract

    This study longitudinally investigated developmental changes in the frame of reference used by children in their gestures and speech. Fifteen children, between 4 and 6 years of age, were asked once a year to describe their route home from their nursery school. When the children were 4 years old, they tended to produce gestures that directly and continuously indicated their actual route in a large gesture space. In contrast, as 6-year-olds, their gestures were segmented and did not match the actual route. Instead, at age 6, the children seemed to create a virtual space in front of themselves to symbolically describe their route. These results indicate that the use of frames of reference develops across the preschool years, shifting from an actual environmental to an abstract environmental frame of reference. Factors underlying the development of frame of reference, including verbal encoding skills and experience, are discussed.
  • Selten, M., Meyer, F., Ba, W., Valles, A., Maas, D., Negwer, M., Eijsink, V. D., van Vugt, R. W. M., van Hulten, J. A., van Bakel, N. H. M., Roosen, J., van der Linden, R., Schubert, D., Verheij, M. M. M., Kasri, N. N., & Martens, G. J. M. (2016). Increased GABAB receptor signaling in a rat model for schizophrenia. Scientific Reports, 6: 34240. doi:10.1038/srep34240.

    Abstract

    Schizophrenia is a complex disorder that affects cognitive function and has been linked, both in patients and animal models, to dysfunction of the GABAergic system. However, the pathophysiological consequences of this dysfunction are not well understood. Here, we examined the GABAergic system in an animal model displaying schizophrenia-relevant features, the apomorphine-susceptible (APO-SUS) rat and its phenotypic counterpart, the apomorphine-unsusceptible (APO-UNSUS) rat at postnatal day 20-22. We found changes in the expression of the GABA-synthesizing enzyme GAD67 specifically in the prelimbic-but not the infralimbic region of the medial prefrontal cortex (mPFC), indicative of reduced inhibitory function in this region in APO-SUS rats. While we did not observe changes in basal synaptic transmission onto LII/III pyramidal cells in the mPFC of APO-SUS compared to APO-UNSUS rats, we report reduced paired-pulse ratios at longer inter-stimulus intervals. The GABA(B) receptor antagonist CGP 55845 abolished this reduction, indicating that the decreased paired-pulse ratio was caused by increased GABA(B) signaling. Consistently, we find an increased expression of the GABA(B1) receptor subunit in APO-SUS rats. Our data provide physiological evidence for increased presynaptic GABAB signaling in the mPFC of APO-SUS rats, further supporting an important role for the GABAergic system in the pathophysiology of schizophrenia.
  • Senft, G. (2004). Sprache, Kognition und Konzepte des Raumes in verschiedenen Kulturen - Zum Problem der Interdependenz sprachlicher und mentaler Strukturen. In L. Jäger (Ed.), Medialität und Mentalität (pp. 163-176). Paderborn: Wilhelm Fink.
  • Senft, G. (2004). What do we really know about serial verb constructions in Austronesian and Papuan languages? In I. Bril, & F. Ozanne-Rivierre (Eds.), Complex predicates in Oceanic languages (pp. 49-64). Berlin: Mouton de Gruyter.
  • Senft, G. (2004). Wosi tauwau topaisewa - songs about migrant workers from the Trobriand Islands. In A. Graumann (Ed.), Towards a dynamic theory of language. Festschrift for Wolfgang Wildgen on occasion of his 60th birthday (pp. 229-241). Bochum: Universitätsverlag Dr. N. Brockmeyer.
  • Senft, G. (1998). Body and mind in the Trobriand Islands. Ethos, 26, 73-104. doi:10.1525/eth.1998.26.1.73.

    Abstract

    This article discusses how the Trobriand Islanders speak about body and mind. It addresses the following questions: do the linguistic data fit into theories about lexical universals of body-part terminology? Can we make inferences about the Trobrianders' conceptualization of psychological and physical states on the basis of these data? If a Trobriand Islander sees these idioms as external manifestations of inner states, then can we interpret them as a kind of ethnopsychological theory about the body and its role for emotions, knowledge, thought, memory, and so on? Can these idioms be understood as representation of Trobriand ethnopsychological theory?
  • Senft, G. (2009). Bronislaw Kasper Malinowski. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 210-225). Amsterdam: John Benjamins.
  • Senft, G., Östman, J.-O., & Verschueren, J. (Eds.). (2009). Culture and language use. Amsterdam: John Benjamins.
  • Senft, G. (2009). Elicitation. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 105-109). Amsterdam: John Benjamins.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (2016). "Masawa - bogeokwa si tuta!": Cultural and cognitive implications of the Trobriand Islanders' gradual loss of their knowledge of how to make a masawa canoe. In P. Meusburger, T. Freytag, & L. Suarsana (Eds.), Ethnic and Cultural Dimensions of Knowledge (pp. 229-256). Heidelberg: Springer Verlag.

    Abstract

    This paper describes how the Trobriand Islanders of Papua New Guinea used to construct their big seagoing masawa canoes and how they used to make their sails, what forms of different knowledge and expertise they needed during various stages of the construction processes, how this knowledge was socially distributed, and the social implications of all the joint communal activities that were necessary until a new canoe could be launched. It then tries to answer the question of why the complex distributed knowledge of how to make a masawa has been gradually getting lost in most of the village communities on the Trobriand Islands; and finally it outlines and discusses the implications of this loss for the Trobriand Islanders' culture, for their social construction of reality, and for their indigenous cognitive capacities.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G. (1998). [Review of the book Anthropological linguistics: An introduction by William A. Foley]. Linguistics, 36, 995-1001.
  • Senft, G. (2009). [Review of the book Geschichten und Gesänge von der Insel Nias in Indonesien ed. by Johannes Maria Hämmerle]. Rundbrief - Forum für Mitglieder des Pazifik-Netzwerkes e.V., 78/09, 29-31.
  • Senft, G. (Ed.). (2004). Deixis and Demonstratives in Oceanic Languages. Canberra: Pacific Linguistics.

    Abstract

    When we communicate, we communicate in a certain context, and this context shapes our utterances. Natural languages are context-bound, and deixis 'concerns the ways in which languages encode or grammaticalise features of the context of utterance or speech event, and thus also concerns ways in which the interpretation of utterances depends on the analysis of that context of utterance' (Stephen Levinson). The systems of deixis and demonstratives in the Oceanic languages represented in the contributions to this volume illustrate the fascinating complexity of spatial reference in these languages. Some of the studies presented here highlight social aspects of deictic reference, illustrating de León's point that 'reference is a collaborative task'. It is hoped that this anthology will contribute to a better understanding of this area and provoke further studies in this extremely interesting, though still rather underdeveloped, research area.
  • Senft, G. (2004). Aspects of spatial deixis in Kilivila. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 59-80). Canberra: Pacific Linguistics.
  • Senft, G. (2004). [Review of the book Serial verbs in Oceanic: A descriptive typology by Terry Crowley]. Linguistics, 42(4), 855-859. doi:10.1515/ling.2004.028.
  • Senft, G. (2004). [Review of the book The Oceanic Languages by John Lynch, Malcolm Ross and Terry Crowley]. Linguistics, 42(2), 515-520. doi:10.1515/ling.2004.016.
  • Senft, G. (2004). Introduction. In G. Senft (Ed.), Deixis and demonstratives in Oceanic languages (pp. 1-13). Canberra: Pacific Linguistics.
  • Senft, G. (2009). Fieldwork. In G. Senft, J.-O. Östman, & J. Verschueren (Eds.), Culture and language use (pp. 131-139). Amsterdam: John Benjamins.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.