Publications

  • Raviv, L., & Arnon, I. (2018). The developmental trajectory of children’s auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4): e12593. doi:10.1111/desc.12593.

    Abstract

    Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only a few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5–12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw.

    Additional information

    Video abstract of the article
  • Redl, T., Eerland, A., & Sanders, T. J. M. (2018). The processing of the Dutch masculine generic zijn ‘his’ across stereotype contexts: An eye-tracking study. PLoS One, 13(10): e0205903. doi:10.1371/journal.pone.0205903.

    Abstract

    Language users often infer a person’s gender when it is not explicitly mentioned. This information is included in the mental model of the described situation, giving rise to expectations regarding the continuation of the discourse. Such gender inferences can be based on two types of information: gender stereotypes (e.g., nurses are female) and masculine generics, which are grammatically masculine word forms that are used to refer to all genders in certain contexts (e.g., To each his own). In this eye-tracking experiment (N = 82), which is the first to systematically investigate the online processing of masculine generic pronouns, we tested whether the frequently used Dutch masculine generic zijn ‘his’ leads to a male bias. In addition, we tested the effect of context by introducing male, female, and neutral stereotypes. We found no evidence for the hypothesis that the generically-intended masculine pronoun zijn ‘his’ results in a male bias. However, we found an effect of stereotype context. After introducing a female stereotype, reading about a man led to an increase in processing time. However, the reverse did not hold, which parallels the finding in social psychology that men are penalized more for gender-nonconforming behavior. This suggests that language processing is not only affected by the strength of stereotype contexts; the associated disapproval of violating these gender stereotypes affects language processing, too.

    Additional information

    pone.0205903.s001.pdf data files
  • Reesink, G. (2010). The Manambu language of East Sepik, Papua New Guinea [Book review]. Studies in Language, 34(1), 226-233. doi:10.1075/sl.34.1.13ree.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2010). Early use of phonetic information in spoken word recognition: Lexical stress drives eye movements immediately. Quarterly Journal of Experimental Psychology, 63(4), 772-783. doi:10.1080/17470210903104412.

    Abstract

    For optimal word recognition, listeners should use all relevant acoustic information as soon as it becomes available. Using printed-word eye-tracking we investigated when during word processing Dutch listeners use suprasegmental lexical stress information to recognize words. Fixations on targets such as 'OCtopus' (capitals indicate stress) were more frequent than fixations on segmentally overlapping but differently stressed competitors ('okTOber') before segmental information could disambiguate the words. Furthermore, prior to segmental disambiguation, initially stressed words were stronger lexical competitors than non-initially stressed words. Listeners recognize words by immediately using all relevant information in the speech signal.
  • Rietbergen, M., Roelofs, A., Den Ouden, H., & Cools, R. (2018). Disentangling cognitive from motor control: Influence of response modality on updating, inhibiting, and shifting. Acta Psychologica, 191, 124-130. doi:10.1016/j.actpsy.2018.09.008.

    Abstract

    It is unclear whether cognitive and motor control are parallel and interactive or serial and independent processes. According to one view, cognitive control refers to a set of modality-nonspecific processes that act on supramodal representations and precede response modality-specific motor processes. An alternative view is that cognitive control represents a set of modality-specific operations that act directly on motor-related representations, implying dependence of cognitive control on motor control. Here, we examined the influence of response modality (vocal vs. manual) on three well-established subcomponent processes of cognitive control: shifting, inhibiting, and updating. We observed effects of all subcomponent processes in reaction times. The magnitude of these effects did not differ between response modalities for shifting and inhibiting, in line with a serial, supramodal view. However, the magnitude of the updating effect differed between modalities, in line with an interactive, modality-specific view. These results suggest that updating represents a modality-specific operation that depends on motor control, whereas shifting and inhibiting represent supramodal operations that act independently of motor control.
  • Rietveld, T., Van Hout, R., & Ernestus, M. (2004). Pitfalls in corpus research. Computers and the Humanities, 38(4), 343-362. doi:10.1007/s10579-004-1919-1.

    Abstract

    This paper discusses some pitfalls in corpus research and suggests solutions on the basis of examples and computer simulations. We first address reliability problems in language transcriptions, agreement between transcribers, and how disagreements can be dealt with. We then show that the frequencies of occurrence obtained from a corpus cannot always be analyzed with the traditional χ² test, as corpus data are often not sequentially independent and unit independent. Next, we stress the relevance of the power of statistical tests, and the sizes of statistically significant effects. Finally, we point out that a t-test based on log odds often provides a better alternative to a χ² analysis based on frequency counts.
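
    The χ² pitfall and the log-odds remedy mentioned in this abstract can be illustrated with a short sketch. The code below is not from the paper: the per-speaker counts are hypothetical placeholders. It simply contrasts a χ² test on pooled counts, which treats every token as independent, with a t-test on per-speaker log odds, which yields one observation per speaker and so restores unit independence.

      # Illustrative sketch with hypothetical counts (not data from the paper).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Counts of a variant vs. its alternative for 10 speakers per corpus
      # condition (rows: speakers, columns: [variant, alternative]).
      cond_a = rng.integers(5, 40, size=(10, 2))
      cond_b = rng.integers(5, 40, size=(10, 2))

      # Naive chi-square on pooled counts ignores that tokens cluster within speakers.
      pooled = np.array([cond_a.sum(axis=0), cond_b.sum(axis=0)])
      chi2_stat, p_chi2, dof, expected = stats.chi2_contingency(pooled)

      # Per-speaker log odds (with a 0.5 continuity correction), compared across
      # conditions with an independent-samples t-test: one value per speaker.
      def log_odds(counts):
          return np.log((counts[:, 0] + 0.5) / (counts[:, 1] + 0.5))

      t_stat, p_t = stats.ttest_ind(log_odds(cond_a), log_odds(cond_b))
      print(f"chi-square p = {p_chi2:.3f}; t-test on log odds p = {p_t:.3f}")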
  • Ringersma, J., Kastens, K., Tschida, U., & Van Berkum, J. J. A. (2010). A principled approach to online publication listings and scientific resource sharing. The Code4Lib Journal, 2010(9), 2520.

    Abstract

    The Max Planck Institute (MPI) for Psycholinguistics has developed a service to manage and present the scholarly output of its researchers. The PubMan database manages publication metadata and the full texts of publications by its researchers. All relevant information regarding a researcher’s work is brought together in this database, including supplementary materials and links to the MPI database for primary research data. The PubMan metadata is harvested into the MPI website CMS (Plone). The system developed for the creation of the publication lists allows the researcher to create a selection of the harvested data in a variety of formats.
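
    The harvesting step described in this abstract (PubMan metadata pulled into the Plone-based website) can be sketched in a few lines. The abstract does not state which protocol the MPI service uses, so the sketch below simply assumes a generic OAI-PMH interface exposing Dublin Core records; the endpoint URL and the choice of the sickle client are assumptions for illustration, not a description of the actual system.

      # Minimal harvesting sketch over OAI-PMH (assumed protocol; the endpoint
      # URL below is hypothetical, not the actual PubMan service).
      from sickle import Sickle

      ENDPOINT = "https://pubman.example.org/oai/provider"  # hypothetical

      def harvest_publications():
          """Yield one dict of Dublin Core fields per harvested record."""
          client = Sickle(ENDPOINT)
          for record in client.ListRecords(metadataPrefix="oai_dc", ignore_deleted=True):
              dc = record.metadata  # each Dublin Core field maps to a list of strings
              yield {
                  "title": dc.get("title", [""])[0],
                  "creators": dc.get("creator", []),
                  "date": dc.get("date", [""])[0],
              }

      if __name__ == "__main__":
          for pub in harvest_publications():
              print(pub["date"], "-", pub["title"])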
  • Ringersma, J., Zinn, C., & Koenig, A. (2010). Eureka! User friendly access to the MPI linguistic data archive. SDV - Sprache und Datenverarbeitung/International Journal for Language Data Processing. [Special issue on Usability aspects of hypermedia systems], 34(1), 67-79.

    Abstract

    The MPI archive hosts a rich and diverse set of linguistic resources, containing some 300,000 audio, video and text resources, which are described by some 100,000 metadata files. New data is ingested on a daily basis, and there is an increasing need to facilitate easy access for both expert and novice users. In this paper, we describe various tools that help users to view all archived content: the IMDI Browser, providing metadata-based access through structured tree navigation and search; a faceted browser where users select from a few distinctive metadata fields (facets) to find the resource(s) they need; a Google Earth overlay where resources can be located via geographic reference; purpose-built web portals giving pre-fabricated access to a well-defined part of the archive; lexicon-based entry points to parts of the archive where browsing a lexicon gives access to non-linguistic material; and finally, an ontology-based approach where lexical spaces are complemented with conceptual ones to give a more structured extra-linguistic view of the languages and cultures it helps to document.
  • Ringersma, J., & Kemps-Snijders, M. (2010). Reaction to the LEXUS review in the LD&C, Vol.3, No 2. Language Documentation & Conservation, 4(2), 75-77. Retrieved from http://hdl.handle.net/10125/4469.

    Abstract

    This technology review gives an overview of LEXUS, the MPI online lexicon tool, and its new functionalities. It is a reaction to a review by Kristina Kotcheva in Language Documentation & Conservation 3(2).
  • Roberts, L., Gullberg, M., & Indefrey, P. (2008). Online pronoun resolution in L2 discourse: L1 influence and general learner effects. Studies in Second Language Acquisition, 30(3), 333-357. doi:10.1017/S0272263108080480.

    Abstract

    This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited an L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. Language Learning, 58(suppl. 1), 57-61. doi:10.1111/j.1467-9922.2008.00461.x.
  • Roby, A. C., & Kidd, E. (2008). The referential communication skills of children with imaginary companions. Developmental Science, 11(4), 531-540. doi:10.1111/j.1467-7687.2008.00699.x.

    Abstract

    The present study investigated the referential communication skills of children with imaginary companions (ICs). Twenty-two children with ICs aged between 4 and 6 years were compared to 22 children without ICs (NICs). The children were matched for age, gender, birth order, number of siblings, and parental education. All children completed the Test of Referential Communication (Camaioni, Ercolani & Lloyd, 1995). The results showed that the children with ICs performed better than the children without ICs on the speaker component of the task. In particular, the IC children were better able to identify a specific referent to their interlocutor than were the NIC children. Furthermore, the IC children described less redundant features of the target picture than did the NIC children. The children did not differ in the listening comprehension component of the task. Overall, the results suggest that the IC children had a better understanding of their interlocutor’s information requirements in conversation. The role of pretend play in the development of communicative competence is discussed in light of these results.
  • Rodenas-Cuadrado, P., Mengede, J., Baas, L., Devanna, P., Schmid, T. A., Yartsev, M., Firzlaff, U., & Vernes, S. C. (2018). Mapping the distribution of language related genes FoxP1, FoxP2 and CntnaP2 in the brains of vocal learning bat species. Journal of Comparative Neurology, 526(8), 1235-1266. doi:10.1002/cne.24385.

    Abstract

    Genes including FOXP2, FOXP1 and CNTNAP2 have been implicated in human speech and language phenotypes, pointing to a role in the development of normal language-related circuitry in the brain. Although speech and language are unique human phenotypes, a comparative approach is possible by addressing language-relevant traits in animal model systems. One such trait, vocal learning, represents an essential component of human spoken language, and is shared by cetaceans, pinnipeds, elephants, some birds and bats. Given their vocal learning abilities, gregarious nature, and reliance on vocalisations for social communication and navigation, bats represent an intriguing mammalian system in which to explore language-relevant genes. We used immunohistochemistry to detail the distribution of FoxP2, FoxP1 and Cntnap2 proteins, accompanied by detailed cytoarchitectural histology in the brains of two vocal learning bat species: Phyllostomus discolor and Rousettus aegyptiacus. We show widespread expression of these genes, similar to what has been previously observed in other species, including humans. A striking difference was observed in the adult Phyllostomus discolor bat, which showed low levels of FoxP2 expression in the cortex, contrasting with patterns found in rodents and non-human primates. We created an online, open-access database within which all data can be browsed, searched, and high resolution images viewed to single cell resolution. The data presented herein reveal regions of interest in the bat brain and provide new opportunities to address the role of these language-related genes in complex vocal-motor and vocal learning behaviours in a mammalian model system.
  • Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.

    Abstract

    There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, both models of reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick, 2000. Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.

    Abstract

    B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. The author contests this claim by showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that extant feedback accounts of the error biases and relevant chronometric effects are incompatible. WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric data, the error biases, and the influence of the impairment locus in aphasic speakers.
  • Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick, 2004. Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.

    Abstract

    WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M. Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In Roelofs (Roelofs, A., 1992a. Cognition 42, 107-142; Roelofs, A., 1993. Cognition 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification (syllables are constructed on the fly rather than stored with lexical items), (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
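
    Two of the mechanisms listed in this abstract, retrieval by spreading activation and competitive selection of syllabic motor programs, can be illustrated generically. The sketch below is not WEAVER's actual formalism or parameter set: it shows linear spreading of activation over a toy four-node network followed by a Luce-ratio competition between two candidate syllable programs, purely to make the two ideas concrete.

      # Generic illustration (not WEAVER's equations): spreading activation over
      # a tiny lexical network, then Luce-ratio competition among candidates.
      import numpy as np

      # Hypothetical network: entry W[i, j] is the weight of the link from node j to node i.
      nodes = ["lemma", "word form", "syllable program A", "syllable program B"]
      W = np.array([
          [0.0, 0.0, 0.0, 0.0],   # the lemma receives no network input here
          [0.6, 0.0, 0.0, 0.0],   # lemma -> word form
          [0.0, 0.5, 0.0, 0.0],   # word form -> target syllable program
          [0.0, 0.1, 0.0, 0.0],   # word form -> competing syllable program
      ])
      decay = 0.4
      activation = np.array([1.0, 0.0, 0.0, 0.0])  # initial activation of the lemma node

      for _ in range(5):  # a few discrete time steps of spreading with decay
          activation = (1 - decay) * activation + W @ activation

      # Luce ratio: a candidate's selection probability is its activation divided
      # by the summed activation of all competing candidates.
      candidates = activation[2:]
      p_select = candidates / candidates.sum()
      print(dict(zip(nodes[2:], np.round(p_select, 3))))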
  • Roll, P., Vernes, S. C., Bruneau, N., Cillario, J., Ponsole-Lenfant, M., Massacrier, A., Rudolf, G., Khalife, M., Hirsch, E., Fisher, S. E., & Szepetowski, P. (2010). Molecular networks implicated in speech-related disorders: FOXP2 regulates the SRPX2/uPAR complex. Human Molecular Genetics, 19, 4848-4860. doi:10.1093/hmg/ddq415.

    Abstract

    It is a challenge to identify the molecular networks contributing to the neural basis of human speech. Mutations in transcription factor FOXP2 cause difficulties mastering fluent speech (developmental verbal dyspraxia, DVD), while mutations of sushi-repeat protein SRPX2 lead to epilepsy of the rolandic (sylvian) speech areas, with DVD or with bilateral perisylvian polymicrogyria. Pathophysiological mechanisms driven by SRPX2 involve modified interaction with the plasminogen activator receptor (uPAR). Independent chromatin-immunoprecipitation microarray screening has identified the uPAR gene promoter as a potential target site bound by FOXP2. Here, we directly tested for the existence of a transcriptional regulatory network between human FOXP2 and the SRPX2/uPAR complex. In silico searches followed by gel retardation assays identified specific efficient FOXP2 binding sites in each of the promoter regions of SRPX2 and uPAR. In FOXP2-transfected cells, significant decreases were observed in the amounts of both SRPX2 (43.6%) and uPAR (38.6%) native transcripts. Luciferase reporter assays demonstrated that FOXP2 expression yielded marked inhibition of SRPX2 (80.2%) and uPAR (77.5%) promoter activity. A mutant FOXP2 that causes DVD (p.R553H) failed to bind to SRPX2 and uPAR target sites, and showed impaired down-regulation of SRPX2 and uPAR promoter activity. In a patient with polymicrogyria of the left rolandic operculum, a novel FOXP2 mutation (p.M406T) was found in the leucine-zipper (dimerization) domain. p.M406T partially impaired FOXP2 regulation of SRPX2 promoter activity, while that of the uPAR promoter remained unchanged. Together with recently described FOXP2-CNTNAP2 and SRPX2/uPAR links, the FOXP2-SRPX2/uPAR network provides exciting insights into molecular pathways underlying speech-related disorders.

    Additional information

    Roll_et_al_2010_Suppl_Material.doc
  • Rommers, J., & Federmeier, K. D. (2018). Lingering expectations: A pseudo-repetition effect for words previously expected but not presented. NeuroImage, 183, 263-272. doi:10.1016/j.neuroimage.2018.08.023.

    Abstract

    Prediction can help support rapid language processing. However, it is unclear whether prediction has downstream consequences, beyond processing in the moment. In particular, when a prediction is disconfirmed, does it linger, or is it suppressed? This study manipulated whether words were actually seen or were only expected, and probed their fate in memory by presenting the words (again) a few sentences later. If disconfirmed predictions linger, subsequent processing of the previously expected (but never presented) word should be similar to actual word repetition. At initial presentation, electrophysiological signatures of prediction disconfirmation demonstrated that participants had formed expectations. Further downstream, relative to unseen words, repeated words elicited a strong N400 decrease, an enhanced late positive complex (LPC), and late alpha band power decreases. Critically, like repeated words, words previously expected but not presented also attenuated the N400. This “pseudo-repetition effect” suggests that disconfirmed predictions can linger at some stages of processing, and demonstrates that prediction has downstream consequences beyond rapid on-line processing.
  • Rommers, J., & Federmeier, K. D. (2018). Predictability's aftermath: Downstream consequences of word predictability as revealed by repetition effects. Cortex, 101, 16-30. doi:10.1016/j.cortex.2017.12.018.

    Abstract

    Stimulus processing in language and beyond is shaped by context, with predictability having a particularly well-attested influence on the rapid processes that unfold during the presentation of a word. But does predictability also have downstream consequences for the quality of the constructed representations? On the one hand, the ease of processing predictable words might free up time or cognitive resources, allowing for relatively thorough processing of the input. On the other hand, predictability might allow the system to run in a top-down “verification mode”, at the expense of thorough stimulus processing. This electroencephalogram (EEG) study manipulated word predictability, which reduced N400 amplitude and inter-trial phase clustering (ITPC), and then probed the fate of the (un)predictable words in memory by presenting them again. More thorough processing of predictable words should increase repetition effects, whereas less thorough processing should decrease them. Repetition was reflected in N400 decreases, late positive complex (LPC) enhancements, and late alpha/beta band power decreases. Critically, prior predictability tended to reduce the repetition effect on the N400, suggesting less priming, and eliminated the repetition effect on the LPC, suggesting a lack of episodic recollection. These findings converge on a top-down verification account, on which the brain processes more predictable input less thoroughly. More generally, the results demonstrate that predictability has multifaceted downstream consequences beyond processing in the moment.
  • Rossano, F. (2010). Questioning and responding in Italian. Journal of Pragmatics, 42, 2756-2771. doi:10.1016/j.pragma.2010.04.010.

    Abstract

    Questions are design problems for both the questioner and the addressee. They must be produced as recognizable objects and must be comprehended by taking into account the context in which they occur and the local situated interests of the participants. This paper investigates how people do ‘questioning’ and ‘responding’ in Italian ordinary conversations. I focus on the features of both questions and responses. I first discuss formal linguistic features that are peculiar to questions in terms of intonation contours (e.g. final rise), morphology (e.g. tags and question words) and syntax (e.g. inversion). I then show additional features that characterize their actual implementation in conversation such as their minimality (often the subject or the verb is only implied) and the usual occurrence of speaker gaze towards the recipient during questions. I then look at which social actions (e.g. requests for information, requests for confirmation) the different question types implement and which responses are regularly produced in return. The data shows that previous descriptions of “interrogative markings” are neither adequate nor sufficient to comprehend the actual use of questions in natural conversation.
  • Rossi, G. (2018). Composite social actions: The case of factual declaratives in everyday interaction. Research on Language and Social Interaction, 51(4), 379-397. doi:10.1080/08351813.2018.1524562.

    Abstract

    When taking a turn at talk, a speaker normally accomplishes a sequential action such as a question, answer, complaint, or request. Sometimes, however, a turn at talk may accomplish not a single but a composite action, involving a combination of more than one action. I show that factual declaratives (e.g., “the feed drip has finished”) are recurrently used to implement composite actions consisting of both an informing and a request or, alternatively, a criticism and a request. A key determinant between these is the recipient’s epistemic access to what the speaker is describing. Factual declaratives afford a range of possible responses, which can tell us how the composite action has been understood and give us insights into its underlying structure. Evidence for the stacking of composite actions, however, is not always directly available in the response and may need to be pieced together with the help of other linguistic and contextual considerations. Data are in Italian with English translation.
  • De Rover, M., Petersson, K. M., Van der Werf, S. P., Cools, A. R., Berger, H. J., & Fernández, G. (2008). Neural correlates of strategic memory retrieval: Differentiating between spatial-associative and temporal-associative strategies. Human Brain Mapping, 29, 1068-1079. doi:10.1002/hbm.20445.

    Abstract

    Remembering complex, multidimensional information typically requires strategic memory retrieval, during which information is structured, for instance by spatial- or temporal associations. Although brain regions involved in strategic memory retrieval in general have been identified, differences in retrieval operations related to distinct retrieval strategies are not well-understood. Thus, our aim was to identify brain regions whose activity is differentially involved in spatial-associative and temporal-associative retrieval. First, we showed that our behavioral paradigm probing memory for a set of object-location associations promoted the use of a spatial-associative structure following an encoding condition that provided multiple associations to neighboring objects (spatial-associative condition) and the use of a temporal-associative structure following another study condition that provided predominantly temporal associations between sequentially presented items (temporal-associative condition). Next, we used an adapted version of this paradigm for functional MRI, where we contrasted brain activity related to the recall of object-location associations that were either encoded in the spatial- or the temporal-associative condition. In addition to brain regions generally involved in recall, we found that activity in higher-order visual regions, including the fusiform gyrus, the lingual gyrus, and the cuneus, was relatively enhanced when subjects used a spatial-associative structure for retrieval. In contrast, activity in the globus pallidus and the thalamus was relatively enhanced when subjects used a temporal-associative structure for retrieval. In conclusion, we provide evidence for differential involvement of these brain regions related to different types of strategic memory retrieval and the neural structures described play a role in either spatial-associative or temporal-associative memory retrieval.
  • Rowland, C. F. (2018). The principles of scientific inquiry. Linguistic Approaches to Bilingualism, 8(6), 770-775. doi:10.1075/lab.18056.row.
  • Ruano, D., Abecasis, G. R., Glaser, B., Lips, E. S., Cornelisse, L. N., de Jong, A. P. H., Evans, D. M., Davey Smith, G., Timpson, N. J., Smit, A. B., Heutink, P., Verhage, M., & Posthuma, D. (2010). Functional gene group analysis reveals a role of synaptic heterotrimeric G proteins in cognitive ability. American Journal of Human Genetics, 86(2), 113-125. doi:10.1016/j.ajhg.2009.12.006.

    Abstract

    Although cognitive ability is a highly heritable complex trait, only a few genes have been identified, explaining relatively low proportions of the observed trait variation. This implies that hundreds of genes of small effect may be of importance for cognitive ability. We applied an innovative method in which we tested for the effect of groups of genes defined according to cellular function (functional gene group analysis). Using an initial sample of 627 subjects, this functional gene group analysis detected that synaptic heterotrimeric guanine nucleotide binding proteins (G proteins) play an important role in cognitive ability (P(EMP) = 1.9 × 10⁻⁴). The association with heterotrimeric G proteins was validated in an independent population sample of 1507 subjects. Heterotrimeric G proteins are central relay factors between the activation of plasma membrane receptors by extracellular ligands and the cellular responses that these induce, and they can be considered a point of convergence, or a "signaling bottleneck." Although alterations in synaptic signaling processes may not be the exclusive explanation for the association of heterotrimeric G proteins with cognitive ability, such alterations may prominently affect the properties of neuronal networks in the brain in such a manner that impaired cognitive ability and lower intelligence are observed. The reported association of synaptic heterotrimeric G proteins with cognitive ability clearly points to a new direction in the study of the genetic basis of cognitive ability.
  • Rubio-Fernández, P. (2018). Trying to discredit the Duplo task with a partial replication: Reply to Paulus and Kammermeier (2018). Cognitive Development, 48, 286-288. doi:10.1016/j.cogdev.2018.07.006.

    Abstract

    Kammermeier and Paulus (2018) report a partial replication of the results of Rubio-Fernández and Geurts (2013) but present their study as a failed replication. Paulus and Kammermeier (2018) insist on a negative interpretation of their findings, discrediting the Duplo task against their own empirical evidence. Here I argue that Paulus and Kammermeier may try to make an impactful contribution to the field by adding to the growing skepticism towards early Theory of Mind studies, but fail to make any significant contribution to our understanding of young children’s Theory of Mind abilities.
  • Rubio-Fernández, P. (2018). What do failed (and successful) replications with the Duplo task show? Cognitive Development, 48, 316-320. doi:10.1016/j.cogdev.2018.07.004.
  • Rubio-Fernández, P. (2008). Concept narrowing: The role of context-independent information. Journal of Semantics, 25(4), 381-409. doi:10.1093/jos/ffn004.

    Abstract

    The present study aims to investigate the extent to which the process of lexical interpretation is context dependent. It has been uncontroversially agreed in psycholinguistics that interpretation is always affected by sentential context. The major debate in lexical processing research has revolved around the question of whether initial semantic activation is context sensitive or rather exhaustive, that is, whether the effect of context occurs before or only after the information associated to a concept has been accessed from the mental lexicon. However, within post-lexical access processes, the question of whether the selection of a word's meaning components is guided exclusively by contextual relevance, or whether certain meaning components might be selected context independently, has not been such an important focus of research. I have investigated this question in the two experiments reported in this paper and, moreover, have analysed the role that context-independent information in concepts might play in word interpretation. This analysis differs from previous studies on lexical processing in that it places experimental work in the context of a theoretical model of lexical pragmatics.
  • Rueschemeyer, S.-A., van Rooij, D., Lindemann, O., Willems, R. M., & Bekkering, H. (2010). The function of words: Distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22, 1844-1851. doi:10.1162/jocn.2009.21310.

    Abstract

    Recent research indicates that language processing relies on brain areas dedicated to perception and action. For example, processing words denoting manipulable objects has been shown to activate a fronto-parietal network involved in actual tool use. This is suggested to reflect the knowledge the subject has about how objects are moved and used. However, information about how to use an object may be much more central to the conceptual representation of an object than information about how to move an object. Therefore, there may be much more fine-grained distinctions between objects on the neural level, especially related to the usability of manipulable objects. In the current study, we investigated whether a distinction can be made between words denoting (1) objects that can be picked up to move (e.g., volumetrically manipulable objects: bookend, clock) and (2) objects that must be picked up to use (e.g., functionally manipulable objects: cup, pen). The results show that functionally manipulable words elicit greater levels of activation in the fronto-parietal sensorimotor areas than volumetrically manipulable words. This suggests that indeed a distinction can be made between different types of manipulable objects. Specifically, how an object is used functionally rather than whether an object can be displaced with the hand is reflected in semantic representations in the brain.
  • De Ruiter, J. P., & Levinson, S. C. (2008). A biological infrastructure for communication underlies the cultural evolution of languages [Commentary on Christiansen & Chater: Language as shaped by the brain]. Behavioral and Brain Sciences, 31(5), 518-518. doi:10.1017/S0140525X08005086.

    Abstract

    Universal Grammar (UG) is indeed evolutionarily implausible. But if languages are just “adapted” to a large primate brain, it is hard to see why other primates do not have complex languages. The answer is that humans have evolved a specialized and uniquely human cognitive architecture, whose main function is to compute mappings between arbitrary signals and communicative intentions. This underlies the development of language in the human species.
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., Levinson, S. C., & Toni, I. (2010). Exploring the cognitive infrastructure of communication. Interaction Studies, 11, 51-77. doi:10.1075/is.11.1.05rui.

    Abstract

    Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.
  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual mechanism models assume separate modules with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 × 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high frequency verbs irrespective of regularity. The PET data showed activations in inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • Salomo, D., Lieven, E., & Tomasello, M. (2010). Young children's sensitivity to new and given information when answering predicate-focus questions. Applied Psycholinguistics, 31, 101-115. doi:10.1017/S014271640999018X.

    Abstract

    In two studies we investigated 2-year-old children's answers to predicate-focus questions depending on the preceding context. Children were presented with a successive series of short video clips showing transitive actions (e.g., frog washing duck) in which either the action (action-new) or the patient (patient-new) was the changing, and therefore new, element. During the last scene the experimenter asked the question (e.g., “What's the frog doing now?”). We found that children expressed the action and the patient in the patient-new condition but expressed only the action in the action-new condition. These results show that children are sensitive to both the predicate-focus question and newness in context. A further finding was that children expressed new patients in their answers more often when there was a verbal context prior to the questions than when there was not.
  • San Roque, L., Kendrick, K. H., Norcliffe, E., & Majid, A. (2018). Universal meaning extensions of perception verbs are grounded in interaction. Cognitive Linguistics, 29, 371-406. doi:10.1515/cog-2017-0034.
  • Sauter, D. (2010). Can introspection teach us anything about the perception of sounds? [Book review]. Perception, 39, 1300-1302. doi:10.1068/p3909rvw.

    Abstract

    Reviews the book, Sounds and Perception: New Philosophical Essays edited by Matthew Nudds and Casey O'Callaghan (2010). This collection of thought-provoking philosophical essays contains chapters on particular aspects of sound perception, as well as a series of essays focusing on the issue of sound location. The chapters on specific topics include several perspectives on how we hear speech, one of the most well-studied aspects of auditory perception in empirical research. Most of the book consists of a series of essays approaching the experience of hearing sounds by focusing on where sounds are in space. An impressive range of opinions on this issue is presented, likely thanks to the fact that the book's editors represent dramatically different viewpoints. The wave based view argues that sounds are located near the perceiver, although the sounds also provide information about objects around the listener, including the source of the sound. In contrast, the source based view holds that sounds are experienced as near or at their sources. The editors acknowledge that additional methods should be used in conjunction with introspection, but they argue that theories of perceptual experience should nevertheless respect phenomenology. With such a range of views derived largely from the same introspective methodology, it remains unresolved which phenomenological account is to be respected.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences, 107(6), 2408-2412. doi:10.1073/pnas.0908239106.

    Abstract

    Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.
  • Sauter, D. (2010). Are positive vocalizations perceived as communicating happiness across cultural boundaries? [Article addendum]. Communicative & Integrative Biology, 3(5), 440-442. doi:10.4161/cib.3.5.12209.

    Abstract

    Laughter communicates a feeling of enjoyment across cultures, while non-verbal vocalizations of several other positive emotions, such as achievement or sensual pleasure, are recognizable only within, but not across, cultural boundaries. Are these positive vocalizations nevertheless interpreted cross-culturally as signaling positive affect? In a match-to-sample task, positive emotional vocal stimuli were paired with positive and negative facial expressions, by English participants and members of the Himba, a semi-nomadic, culturally isolated Namibian group. The results showed that laughter was associated with a smiling facial expression across both groups, consistent with previous work showing that human laughter is a positive, social signal with deep evolutionary roots. However, non-verbal vocalizations of achievement, sensual pleasure, and relief were not cross-culturally associated with smiling facial expressions, perhaps indicating that these types of vocalizations are not cross-culturally interpreted as communicating a positive emotional state, or alternatively that these emotions are associated with positive facial expression other than smiling. These results are discussed in the context of positive emotional communication in vocal and facial signals. Research on the perception of non-verbal vocalizations of emotions across cultures demonstrates that some affective signals, including laughter, are associated with particular facial configurations and emotional states, supporting theories of emotions as a set of evolved functions that are shared by all humans regardless of cultural boundaries.
  • Sauter, D. (2010). More than happy: The need for disentangling positive emotions. Current Directions in Psychological Science, 19, 36-40. doi:10.1177/0963721409359290.

    Abstract

    Despite great advances in scientific understanding of emotional processes in the last decades, research into the communication of emotions has been constrained by a strong bias toward negative affective states. Typically, studies distinguish between different negative emotions, such as disgust, sadness, anger, and fear. In contrast, most research uses only one category of positive affect, “happiness,” which is assumed to encompass all positive emotional states. This article reviews recent research showing that a number of positive affective states have discrete, recognizable signals. An increased focus on cues other than facial expressions is necessary to understand these positive states and how they are communicated; vocalizations, touch, and postural information offer promising avenues for investigating signals of positive affect. A full scientific understanding of the functions, signals, and mechanisms of emotions requires abandoning the unitary concept of happiness and instead disentangling positive emotions.
  • Sauter, D., Eisner, F., Calder, A. J., & Scott, S. K. (2010). Perceptual cues in nonverbal vocal expressions of emotion. Quarterly Journal of Experimental Psychology, 63(11), 2251-2272. doi:10.1080/17470211003721642.

    Abstract

    Work on facial expressions of emotions (Calder, Burton, Miller, Young, & Akamatsu, 2001) and emotionally inflected speech (Banse & Scherer, 1996) has successfully delineated some of the physical properties that underlie emotion recognition. To identify the acoustic cues used in the perception of nonverbal emotional expressions like laughter and screams, an investigation was conducted into vocal expressions of emotion, using nonverbal vocal analogues of the “basic” emotions (anger, fear, disgust, sadness, and surprise; Ekman & Friesen, 1971; Scott et al., 1997), and of positive affective states (Ekman, 1992, 2003; Sauter & Scott, 2007). First, the emotional stimuli were categorized and rated to establish that listeners could identify and rate the sounds reliably and to provide confusion matrices. A principal components analysis of the rating data yielded two underlying dimensions, correlating with the perceived valence and arousal of the sounds. Second, acoustic properties of the amplitude, pitch, and spectral profile of the stimuli were measured. A discriminant analysis procedure established that these acoustic measures provided sufficient discrimination between expressions of emotional categories to permit accurate statistical classification. Multiple linear regressions with participants' subjective ratings of the acoustic stimuli showed that all classes of emotional ratings could be predicted by some combination of acoustic measures and that most emotion ratings were predicted by different constellations of acoustic features. The results demonstrate that, similarly to affective signals in facial expressions and emotionally inflected speech, the perceived emotional character of affective vocalizations can be predicted on the basis of their physical features.
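
    The analysis pipeline summarized in this abstract (a principal components analysis of the listener ratings, followed by a discriminant analysis classifying emotion categories from acoustic measures) can be illustrated with a short scikit-learn sketch. The arrays below are random placeholders, not the study's stimuli or measurements, and the feature counts are arbitrary; only the shape of the procedure is meant to carry over.

      # Illustrative sketch with placeholder data (not the study's measurements):
      # PCA on rating scales, then cross-validated linear discriminant analysis
      # classifying emotion categories from acoustic measures.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_stimuli = 100
      ratings = rng.normal(size=(n_stimuli, 10))    # e.g., 10 rating scales per sound
      acoustics = rng.normal(size=(n_stimuli, 6))   # e.g., amplitude, pitch, spectral measures
      emotion = rng.integers(0, 5, size=n_stimuli)  # 5 hypothetical emotion categories

      # How much rating variance do the first two components capture?
      # (The paper reports two dimensions, interpreted as valence and arousal.)
      pca = PCA(n_components=2).fit(ratings)
      print("variance explained:", np.round(pca.explained_variance_ratio_, 3))

      # Can the acoustic measures discriminate the categories better than chance?
      lda = LinearDiscriminantAnalysis()
      scores = cross_val_score(lda, acoustics, emotion, cv=5)
      print("cross-validated classification accuracy:", round(scores.mean(), 3))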
  • Sauter, D., & Eimer, M. (2010). Rapid detection of emotion from human vocalizations. Journal of Cognitive Neuroscience, 22, 474-481. doi:10.1162/jocn.2009.21215.

    Abstract

    The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations, such as screams. An early fronto-central positivity to fearful vocalizations compared with spectrally rotated and thus acoustically matched versions of the same sounds started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
  • Sauter, D., Eisner, F., Ekman, P., & Scott, S. K. (2010). Reply to Gewald: Isolated Himba settlements still exist in Kaokoland [Letter to the editor]. Proceedings of the National Academy of Sciences of the United States of America, 107(18), E76. doi:10.1073/pnas.1002264107.

    Abstract

    We agree with Gewald (1) that historical and anthropological accounts are essential tools for understanding the Himba culture, and these accounts are valuable to both us and him. However, we contest his claim that the Himba individuals in our study were not culturally isolated. Gewald (1) claims that it would be “unlikely” that the Himba people with whom we worked had “not been exposed to the affective signals of individuals from cultural groups other than their own” as stated in our paper (2). Gewald (1) seems to argue that, because outside groups have had contact with some Himba, this means that these events affected all Himba. Yet, the Himba constitute a group of 20,000-50,000 people (3) living in small settlements scattered across the vast Kaokoland region, an area of 49,000 km² (4).
  • Sauter, D., & Levinson, S. C. (2010). What's embodied in a smile? [Comment on Niedenthal et al.]. Behavioral and Brain Sciences, 33, 457-458. doi:10.1017/S0140525X10001597.

    Abstract

    Differentiation of the forms and functions of different smiles is needed, but they should be based on empirical data on distinctions that senders and receivers make, and the physical cues that are employed. Such data would allow for a test of whether smiles can be differentiated using perceptual cues alone or whether mimicry or simulation are necessary.
  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Schaeffer, J., van Witteloostuijn, M., & Creemers, A. (2018). Article choice, theory of mind, and memory in children with high-functioning autism and children with specific language impairment. Applied Psycholinguistics, 39(1), 89-115. doi:10.1017/S0142716417000492.

    Abstract

    Previous studies show that young, typically developing (TD) children (age 5) make errors in the choice between a definite and an indefinite article. Suggested explanations for overgeneration of the definite article include failure to distinguish speaker from hearer assumptions, and for overgeneration of the indefinite article failure to draw scalar implicatures, and weak working memory. However, no direct empirical evidence for these accounts is available. In this study, 27 Dutch-speaking children with high-functioning autism, 27 children with SLI, and 27 TD children aged 5–14 were administered a pragmatic article choice test, a nonverbal theory of mind test, and three types of memory tests (phonological memory, verbal, and nonverbal working memory). The results show that the children with high-functioning autism and SLI (a) make similar errors, that is, they overgenerate the indefinite article; (b) are TD-like at theory of mind, but (c) perform significantly more poorly than the TD children on phonological memory and verbal working memory. We propose that weak memory skills prevent the integration of the definiteness scale with the preceding discourse, resulting in the failure to consistently draw the relevant scalar implicature. This in turn yields the occasional erroneous choice of the indefinite article a in definite contexts.
  • Scharenborg, O., & Boves, L. (2010). Computational modelling of spoken-word recognition processes: Design choices and evaluation. Pragmatics & Cognition, 18, 136-164. doi:10.1075/pc.18.1.06sch.

    Abstract

    Computational modelling has proven to be a valuable approach in developing theories of spoken-word processing. In this paper, we focus on a particular class of theories in which it is assumed that the spoken-word recognition process consists of two consecutive stages, with an 'abstract' discrete symbolic representation at the interface between the stages. In evaluating computational models, it is important to bring in independent arguments for the cognitive plausibility of the algorithms that are selected to compute the processes in a theory. This paper discusses the relation between behavioural studies, theories, and computational models of spoken-word recognition. We explain how computational models can be assessed in terms of the goodness of fit with the behavioural data and the cognitive plausibility of the algorithms. An in-depth analysis of several models provides insights into how computational modelling has led to improved theories and to a better understanding of the human spoken-word recognition process.
  • Scharenborg, O. (2010). Modeling the use of durational information in human spoken-word recognition. Journal of the Acoustical Society of America, 127, 3758-3770. doi:10.1121/1.3377050.

    Abstract

    Evidence that listeners, at least in a laboratory environment, use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past decades. This paper introduces Fine-Tracker, a computational model of word recognition specifically designed for tracking fine-phonetic information in the acoustic speech signal and using it during word recognition. Two simulations were carried out using real speech as input to the model. The simulations showed that the Fine-Tracker, as has been found for humans, benefits from durational information during word recognition, and uses it to disambiguate the incoming speech signal. The availability of durational information allows the computational model to distinguish embedded words from their matrix words (first simulation), and to distinguish word final realizations of s from word initial realizations (second simulation). Fine-Tracker thus provides the first computational model of human word recognition that is able to extract durational information from the speech signal and to use it to differentiate words.
  • Scharenborg, O., Wan, V., & Ernestus, M. (2010). Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. Journal of the Acoustical Society of America, 127, 1084-1095. doi:10.1121/1.3277194.

    Abstract

    Despite using different algorithms, most unsupervised automatic phone segmentation methods achieve similar performance in terms of percentage correct boundary detection. Nevertheless, unsupervised segmentation algorithms are not able to perfectly reproduce manually obtained reference transcriptions. This paper investigates fundamental problems for unsupervised segmentation algorithms by comparing a phone segmentation obtained using only the acoustic information present in the signal with a reference segmentation created by human transcribers. The analyses of the output of an unsupervised speech segmentation method that uses acoustic change to hypothesize boundaries showed that acoustic change is a fairly good indicator of segment boundaries: over two-thirds of the hypothesized boundaries coincide with segment boundaries. Statistical analyses showed that the errors are related to segment duration, sequences of similar segments, and inherently dynamic phones. In order to improve unsupervised automatic speech segmentation, current one-stage bottom-up segmentation methods should be expanded into two-stage segmentation methods that are able to use a mix of bottom-up information extracted from the speech signal and automatically derived top-down information. In this way, unsupervised methods can be improved while remaining flexible and language-independent.
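
    The evaluation logic described in this abstract (hypothesizing boundaries from acoustic change and scoring them against a manual reference within a tolerance window) can be illustrated with the toy sketch below; the feature matrix, threshold, and reference boundary times are invented for the example.

        # Toy illustration of the evaluation logic described above: boundaries hypothesized
        # from frame-to-frame acoustic change are scored against a manual reference within a
        # tolerance window. Features, threshold, and reference times are invented.
        import numpy as np

        def hypothesize_boundaries(features, frame_shift=0.01, threshold=1.0):
            """Place a boundary at every local peak of frame-to-frame acoustic change above `threshold`."""
            change = np.linalg.norm(np.diff(features, axis=0), axis=1)
            peaks = [i for i in range(1, len(change) - 1)
                     if change[i] > threshold and change[i] >= change[i - 1] and change[i] >= change[i + 1]]
            return np.array(peaks) * frame_shift

        def hit_rate(hypothesized, reference, tolerance=0.02):
            """Fraction of hypothesized boundaries within `tolerance` seconds of a reference boundary."""
            hits = sum(np.min(np.abs(reference - t)) <= tolerance for t in hypothesized)
            return hits / max(len(hypothesized), 1)

        features = np.random.default_rng(1).random((300, 13))    # e.g., MFCC frames (placeholder)
        reference = np.array([0.25, 0.61, 0.98, 1.40, 2.05])     # manual boundaries in seconds (placeholder)
        print(round(hit_rate(hypothesize_boundaries(features), reference), 2))
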
  • Scheeringa, R., Bastiaansen, M. C. M., Petersson, K. M., Oostenveld, R., Norris, D. G., & Hagoort, P. (2008). Frontal theta EEG activity correlates negatively with the default mode network in resting state. International Journal of Psychophysiology, 67, 242-251. doi:10.1016/j.ijpsycho.2007.05.017.

    Abstract

    We used simultaneously recorded EEG and fMRI to investigate in which areas the BOLD signal correlates with frontal theta power changes, while subjects were quietly lying resting in the scanner with their eyes open. To obtain a reliable estimate of frontal theta power we applied ICA on band-pass filtered (2–9 Hz) EEG data. For each subject we selected the component that best matched the mid-frontal scalp topography associated with the frontal theta rhythm. We applied a time-frequency analysis on this component and used the time course of the frequency bin with the highest overall power to form a regressor that modeled spontaneous fluctuations in frontal theta power. No significant positive BOLD correlations with this regressor were observed. Extensive negative correlations were observed in the areas that together form the default mode network. We conclude that frontal theta activity can be seen as an EEG index of default mode network activity.
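
    The EEG steps described in this abstract (band-pass filtering at 2-9 Hz, ICA, and a theta power time course for one component) can be outlined as follows; the signals are synthetic, the component choice is a placeholder, and the real pipeline selects the component by its mid-frontal topography and convolves the power time course with a haemodynamic response function before using it as an fMRI regressor.

        # Outline of the EEG steps described above: band-pass filtering (2-9 Hz), ICA, and an
        # instantaneous theta-power time course for one component. Signals are synthetic; the
        # real pipeline selects the component by its mid-frontal topography and convolves the
        # power time course with a haemodynamic response function before using it as a regressor.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from sklearn.decomposition import FastICA

        fs = 250
        eeg = np.random.default_rng(2).standard_normal((32, fs * 60))   # 32 channels, 60 s (placeholder)

        b, a = butter(4, [2 / (fs / 2), 9 / (fs / 2)], btype="band")    # 2-9 Hz band-pass
        filtered = filtfilt(b, a, eeg, axis=1)

        ica = FastICA(n_components=20, random_state=0)
        components = ica.fit_transform(filtered.T).T                    # components x time

        theta_component = components[0]                                 # placeholder for the mid-frontal theta component
        theta_power = np.abs(hilbert(theta_component)) ** 2             # instantaneous theta power
        print(theta_power.shape)
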
  • Schijven, D., Kofink, D., Tragante, V., Verkerke, M., Pulit, S. L., Kahn, R. S., Veldink, J. H., Vinkers, C. H., Boks, M. P., & Luykx, J. J. (2018). Comprehensive pathway analyses of schizophrenia risk loci point to dysfunctional postsynaptic signaling. Schizophrenia Research, 199, 195-202. doi:10.1016/j.schres.2018.03.032.

    Abstract

    Large-scale genome-wide association studies (GWAS) have implicated many low-penetrance loci in schizophrenia. However, its pathological mechanisms are poorly understood, which in turn hampers the development of novel pharmacological treatments. Pathway and gene set analyses carry the potential to generate hypotheses about disease mechanisms and have provided biological context to genome-wide data of schizophrenia. We aimed to examine which biological processes are likely candidates to underlie schizophrenia by integrating novel and powerful pathway analysis tools using data from the largest Psychiatric Genomics Consortium schizophrenia GWAS (N=79,845) and the most recent 2018 schizophrenia GWAS (N=105,318). By applying a primary unbiased analysis (Multi-marker Analysis of GenoMic Annotation; MAGMA) to weigh the role of biological processes from the Molecular Signatures Database (MSigDB), we identified enrichment of common variants in synaptic plasticity and neuron differentiation gene sets. We supported these findings using MAGMA, Meta-Analysis Gene-set Enrichment of variaNT Associations (MAGENTA) and Interval Enrichment Analysis (INRICH) on detailed synaptic signaling pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and found enrichment in mainly the dopaminergic and cholinergic synapses. Moreover, shared genes involved in these neurotransmitter systems had a large contribution to the observed enrichment, protein products of top genes in these pathways showed more direct and indirect interactions than expected by chance, and expression profiles of these genes were largely similar among brain tissues. In conclusion, we provide strong and consistent genetics and protein-interaction informed evidence for the role of postsynaptic signaling processes in schizophrenia, opening avenues for future translational and psychopharmacological studies.
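
    MAGMA, MAGENTA, and INRICH are dedicated, SNP- and LD-aware tools; purely to illustrate the general idea behind gene-set enrichment, the sketch below runs a simple hypergeometric over-representation test with invented gene counts.

        # Toy over-representation test illustrating the general logic of gene-set enrichment;
        # MAGMA, MAGENTA, and INRICH use more sophisticated, SNP- and LD-aware models.
        # All counts below are invented.
        from scipy.stats import hypergeom

        total_genes = 18000        # genes tested genome-wide (assumed)
        associated_genes = 600     # genes implicated by GWAS loci (assumed)
        pathway_size = 250         # genes in, e.g., a synaptic-signalling gene set (assumed)
        overlap = 25               # implicated genes that fall inside the gene set (assumed)

        # Probability of observing at least `overlap` implicated genes in the set by chance
        p_value = hypergeom.sf(overlap - 1, total_genes, associated_genes, pathway_size)
        print(f"enrichment p = {p_value:.3g}")
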
  • Schilberg, L., Engelen, T., Ten Oever, S., Schuhmann, T., De Gelder, B., De Graaf, T. A., & Sack, A. T. (2018). Phase of beta-frequency tACS over primary motor cortex modulates corticospinal excitability. Cortex, 103, 142-152. doi:10.1016/j.cortex.2018.03.001.

    Abstract

    The assessment of corticospinal excitability by means of transcranial magnetic stimulation (TMS)-induced motor evoked potentials (MEPs) is an established diagnostic tool in neurophysiology and a widely used procedure in fundamental brain research. However, concern about low reliability of these measures has grown recently. One possible cause of high variability of MEPs under identical acquisition conditions could be the influence of oscillatory neuronal activity on corticospinal excitability. Based on research showing that transcranial alternating current stimulation (tACS) can entrain neuronal oscillations, we here test whether alpha or beta frequency tACS can influence corticospinal excitability in a phase-dependent manner. We applied tACS at individually calibrated alpha- and beta-band oscillation frequencies, or we applied sham tACS. Simultaneous single TMS pulses time locked to eight equidistant phases of the ongoing tACS signal evoked MEPs. To evaluate offline effects of stimulation frequency, MEP amplitudes were measured before and after tACS. To evaluate whether tACS influences MEP amplitude, we fitted one-cycle sinusoids to the average MEPs elicited at the different phase conditions of each tACS frequency. We found no frequency-specific offline effects of tACS. However, beta-frequency tACS modulation of MEPs was phase-dependent. Post hoc analyses suggested that this effect was specific to participants with low (<19 Hz) intrinsic beta frequency. In conclusion, by showing that beta tACS influences MEP amplitude in a phase-dependent manner, our results support a potential role attributed to neuronal oscillations in regulating corticospinal excitability. Moreover, our findings may be useful for the development of TMS protocols that improve the reliability of MEPs as a meaningful tool for research applications or for clinical monitoring and diagnosis.
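
    The phase-dependence analysis described in this abstract (fitting one-cycle sinusoids to mean MEP amplitudes from the eight equidistant tACS phase bins) can be sketched as follows; the MEP values and starting parameters are invented for illustration.

        # Sketch of the phase-dependence analysis described above: a one-cycle sinusoid is
        # fitted to mean MEP amplitudes from the eight equidistant tACS phase bins. The MEP
        # values and starting parameters below are invented for illustration.
        import numpy as np
        from scipy.optimize import curve_fit

        phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)                # eight equidistant phase bins
        mep = np.array([1.02, 1.10, 1.21, 1.15, 0.98, 0.90, 0.85, 0.93])     # mean MEP amplitudes in mV (invented)

        def one_cycle_sine(phase, amplitude, phase_shift, offset):
            return amplitude * np.sin(phase + phase_shift) + offset

        (amplitude, phase_shift, offset), _ = curve_fit(one_cycle_sine, phases, mep, p0=[0.1, 0.0, 1.0])
        print(f"modulation depth ≈ {amplitude:.3f} mV, preferred phase shift ≈ {phase_shift:.2f} rad")
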
  • Schiller, N. O., Fikkert, P., & Levelt, C. C. (2004). Stress priming in picture naming: An SOA study. Brain and Language, 90(1-3), 231-240. doi:10.1016/S0093-934X(03)00436-X.

    Abstract

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel–MOtor vs. koSTUUM–MOtor; capital letters indicate stressed syllables in prime–target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. The reason for this effect was neither the quality of the pictures, nor frequency of occurrence, nor voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.
  • Schiller, N. O., & De Ruiter, J. P. (2004). Some notes on priming, alignment, and self-monitoring [Commentary]. Behavioral and Brain Sciences, 27(2), 208-209. doi:10.1017/S0140525X0441005X.

    Abstract

    Any complete theory of speaking must take the dialogical function of language use into account. Pickering & Garrod (P&G) make some progress on this point. However, we question whether their interactive alignment model is the optimal approach. In this commentary, we specifically criticize (1) their notion of alignment being implemented through priming, and (2) their claim that self-monitoring can occur at all levels of linguistic representation.
  • Schiller, N. O. (2004). The onset effect in word naming. Journal of Memory and Language, 50(4), 477-490. doi:10.1016/j.jml.2004.02.004.

    Abstract

    This study investigates whether or not masked form priming effects in the naming task depend on the number of shared segments between prime and target. Dutch participants named bisyllabic words, which were preceded by visual masked primes. When primes shared the initial segment(s) with the target, naming latencies were shorter than in a control condition (string of percent signs). Onset complexity (singleton vs. complex word onset) did not modulate this priming effect in Dutch. Furthermore, significant priming due to shared final segments was only found when the prime did not contain a mismatching onset, suggesting an interfering role of initial non-target segments. It is concluded that (a) degree of overlap (segmental match vs. mismatch), and (b) position of overlap (initial vs. final) influence the magnitude of the form priming effect in the naming task. A modification of the segmental overlap hypothesis (Schiller, 1998) is proposed to account for the data.
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

    A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['leter] 'id.', where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.

    Abstract

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
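
    Purely as a hypothetical illustration of how word onset and offset times of the kind AlignTool writes to a Praat TextGrid might be used downstream, the sketch below computes a naming latency from a simplified interval representation; the interval format and labels are assumptions, not AlignTool's actual output.

        # Hypothetical illustration of how word onset/offset times of the kind AlignTool writes
        # to a Praat TextGrid might be used downstream, e.g., to compute a naming latency. The
        # interval representation and labels are simplified assumptions, not AlignTool's format.
        from dataclasses import dataclass

        @dataclass
        class Interval:
            label: str
            start: float   # seconds from recording onset
            end: float

        def naming_latency(word_tier, stimulus_onset):
            """Latency from stimulus onset to the start of the first labelled (non-silent) interval."""
            for interval in word_tier:
                if interval.label not in ("", "<sil>") and interval.start >= stimulus_onset:
                    return interval.start - stimulus_onset
            return None

        word_tier = [Interval("<sil>", 0.00, 0.74), Interval("tafel", 0.74, 1.21)]
        print(round(naming_latency(word_tier, stimulus_onset=0.10), 3))   # -> 0.64
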
  • Schimke, S., Verhagen, J., & Dimroth, C. (2008). Particules additives et finitude en néerlandais et allemand L2: Étude expérimentale. Acquisition et Interaction en Language Etrangère, 26, 191-210.

    Abstract

    This study addresses the question of whether there is a relationship between the equivalents of the additive particles 'also' and 'again' that scope over the topic, on the one hand, and finiteness, on the other, in the learner varieties of Turkish-speaking learners of Dutch and German. In data obtained with a controlled task, we observe that finiteness is marked less frequently in utterances containing these particles than in comparable utterances without them. This holds both for the marking of finiteness on lexical verbs and for the presence of finite verbs without lexical content, such as the copula. Moreover, we show that the particles can precede the finite verb in the learners' language. These results can be explained by the functional similarity between finiteness and topic-related particles.
  • Schmale, R., Cristia, A., Seidl, A., & Johnson, E. K. (2010). Developmental changes in infants’ ability to cope with dialect variation in word recognition. Infancy, 15, 650-662. doi:10.1111/j.1532-7078.2010.00032.x.

    Abstract

    Toward the end of their first year of life, infants’ overly specified word representations are thought to give way to more abstract ones, which helps them to better cope with variation not relevant to word identity (e.g., voice and affect). This developmental change may help infants process the ambient language more efficiently, thus enabling rapid gains in vocabulary growth. One particular kind of variability that infants must accommodate is that of dialectal accent, because most children will encounter speakers from different regions and backgrounds. In this study, we explored developmental changes in infants’ ability to recognize words in continuous speech by familiarizing them with words spoken by a speaker of their own region (North Midland-American English) or a different region (Southern Ontario Canadian English), and testing them with passages spoken by a speaker of the opposite dialectal accent. Our results demonstrate that 12- but not 9-month-olds readily recognize words in the face of dialectal variation.
  • Schoenmakers, G.-J., & Piepers, J. (2018). Echter kan het wel. Levende Talen Magazine, 105(4), 10-13.
  • Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2008). Imaging the human motor system's beta-band synchronization during isometric contraction. NeuroImage, 41, 437-447. doi:10.1016/j.neuroimage.2008.01.045.

    Abstract

    Rhythmic synchronization likely subserves interactions among neuronal groups. One of the best studied rhythmic synchronization phenomena in the human nervous system is the beta-band (15-30 Hz) synchronization in the motor system. In this study, we imaged structures across the human brain that are synchronized to the motor system's beta rhythm. We recorded whole-head magnetoencephalograms (MEG) and electromyograms (EMG) of left/right extensor carpi radialis muscle during left/right wrist extension. We analyzed coherence, on the one hand between the EMG and neuronal sources in the brain, and on the other hand between different brain sources, using a spatial filtering approach. Cortico-muscular coherence analysis revealed a spatial maximum of coherence to the muscle in motor cortex contralateral to the muscle in accordance with earlier findings. Moreover, by applying a two-dipole source model, we unveiled significantly coherent clusters of voxels in the ipsilateral cerebellar hemisphere and ipsilateral cerebral motor regions. The spatial pattern of coherence to the right and left arm EMG was roughly mirror reversed across the midline, in agreement with known physiology. Subsequently, we analyzed the brain-wide pattern of beta-band coherence to the motor cortex contralateral to the contracting muscle. This analysis did not reveal any convincing pattern. Because the prior cortico-muscular analysis had demonstrated the expected pattern in our data, this negative finding demonstrates a current limitation of the applied method for cortico-cortical coherence analysis. We conclude that during an isometric muscle contraction, several distributed brain regions form a brain-wide beta-band network for motor control.
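
    A minimal sketch of cortico-muscular coherence of the kind analysed in this study is given below, using a synthetic "MEG source" and "EMG" signal that share a 22 Hz drive; real analyses work on beamformed source estimates and typically rectify the EMG first, so everything here is a toy stand-in.

        # Minimal sketch of cortico-muscular coherence of the kind analysed above: coherence
        # between a synthetic "MEG source" and "EMG" signal that share a 22 Hz drive, read out
        # in the beta band (15-30 Hz). Real analyses use beamformed source estimates and
        # typically rectify the EMG first; everything here is a toy stand-in.
        import numpy as np
        from scipy.signal import coherence

        fs = 1000
        t = np.arange(0, 30, 1 / fs)
        rng = np.random.default_rng(3)
        common_beta = np.sin(2 * np.pi * 22 * t)                 # shared 22 Hz (beta) drive
        meg_source = common_beta + rng.standard_normal(t.size)
        emg = 0.5 * common_beta + rng.standard_normal(t.size)

        freqs, coh = coherence(meg_source, emg, fs=fs, nperseg=1024)
        beta_band = (freqs >= 15) & (freqs <= 30)
        print(f"peak beta-band coherence: {coh[beta_band].max():.2f}")
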
  • De Schryver, J., Neijt, A., Ghesquière, P., & Ernestus, M. (2008). Analogy, frequency, and sound change: The case of Dutch devoicing. Journal of Germanic Linguistics, 20(2), 159-195. doi:10.1017/S1470542708000056.

    Abstract

    This study investigates the roles of phonetic analogy and lexical frequency in an ongoing sound change, the devoicing of fricatives in Dutch, which occurs mainly in the Netherlands and to a lesser degree in Flanders. In the experiment, Dutch and Flemish students read two variants of 98 words: the standard and a nonstandard form with the incorrect voice value of the fricative. Dutch students chose the non-standard forms with devoiced fricatives more often than Flemish students. Moreover, devoicing, though a gradual process, appeared lexically diffused, affecting first the words that are low in frequency and phonetically similar to words with voiceless fricatives.
  • Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2008). An empirical characterization of response types in German association norms. Research on Language and Computation, 6, 205-238. doi:10.1007/s11168-008-9048-4.

    Abstract

    This article presents a study to distinguish and quantify the various types of semantic associations provided by humans, to investigate their properties, and to discuss the impact that our analyses may have on NLP tasks. Specifically, we concentrate on two issues related to word properties and word relations: (1) We address the task of modelling word meaning by empirical features in data-intensive lexical semantics. Relying on large-scale corpus-based resources, we identify the contextual categories and functions that are activated by the associates and therefore contribute to the salient meaning components of individual words and across words. As a result, we discuss conceptual roles and present evidence for the usefulness of co-occurrence information in distributional descriptions. (2) We assume that semantic associates provide a means to investigate the range of semantic relations between words and contexts, and we provide insight into which types of semantic relations are treated as important or salient by the speakers of the language.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil I. Linguistische Berichte, 141, 371-400.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil II. Linguistische Berichte, 142, 451-475.
  • Schwager, W., & Zeshan, U. (2008). Word classes in sign languages: Criteria and classifications. Studies in Language, 32(3), 509-545. doi:10.1075/sl.32.3.03sch.

    Abstract

    The topic of word classes remains curiously under-represented in the sign language literature due to many theoretical and methodological problems in sign linguistics. This article focuses on language-specific classifications of signs into word classes in two different sign languages: German Sign Language and Kata Kolok, the sign language of a village community in Bali. The article discusses semantic and structural criteria for identifying word classes in the target sign languages. On the basis of a data set of signs, these criteria are systematically tested out as a first step towards an inductive classification of signs. Approaches and analyses relating to the problem of word classes in linguistic typology are used for shedding new light on the issue of word class distinctions in sign languages.
  • Schweinfurth, M. K., De Troy, S. E., Van Leeuwen, E. J. C., Call, J., & Haun, D. B. M. (2018). Spontaneous social tool use in Chimpanzees (Pan troglodytes). Journal of Comparative Psychology, 132(4), 455-463. doi:10.1037/com0000127.

    Abstract

    Although there is good evidence that social animals show elaborate cognitive skills to deal with others, there are few reports of animals physically using social agents and their respective responses as means to an end—social tool use. In this case study, we investigated spontaneous and repeated social tool use behavior in chimpanzees (Pan troglodytes). We presented a group of chimpanzees with an apparatus, in which pushing two buttons would release juice from a distantly located fountain. Consequently, any one individual could only either push the buttons or drink from the fountain but never push and drink simultaneously. In this scenario, an adult male attempted to retrieve three other individuals and push them toward the buttons that, if pressed, released juice from the fountain. With this strategy, the social tool user increased his juice intake 10-fold. Interestingly, the strategy was stable over time, which was possibly enabled by playing with the social tools. With over 100 instances, we provide the biggest data set on social tool use recorded among nonhuman animals so far. The repeated use of other individuals as social tools may represent a complex social skill linked to Machiavellian intelligence.
  • Schwichtenberg, B., & Schiller, N. O. (2004). Semantic gender assignment regularities in German. Brain and Language, 90(1-3), 326-337. doi:10.1016/S0093-934X(03)00445-0.

    Abstract

    Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution identified within the vocabulary of a language, are still controversial. The present study is an empirical contribution trying to shed light on the gender assignment system native German speakers have at their disposal. Participants presented with a category (e.g., predator) and a pair of gender-marked pseudowords (e.g., der Trelle vs. die Stisse) preferentially selected the pseudoword preceded by the gender-marked determiner “associated” with the category (e.g., masculine). This finding suggests that semantic regularities might be part of the gender assignment system of native speakers.
  • Seeliger, K., Fritsche, M., Güçlü, U., Schoenmakers, S., Schoffelen, J.-M., Bosch, S. E., & Van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage, 180, 253-266. doi:10.1016/j.neuroimage.2017.07.018.

    Abstract

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.
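
    The encoding-model logic described in this abstract (mapping CNN layer activations to measured brain responses and testing on held-out stimuli) can be sketched with a regularized linear model; the feature and response arrays below are random placeholders standing in for CNN features and source-level MEG amplitudes at a single time point.

        # Sketch of the encoding-model logic described above: activations of one CNN layer are
        # mapped to measured responses with ridge regression and evaluated on held-out stimuli.
        # The arrays are random placeholders for CNN features and source-level MEG amplitudes
        # at a single time point; names and sizes are assumptions.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        cnn_features = rng.standard_normal((1000, 512))                 # 1,000 images x CNN layer units
        true_weights = rng.standard_normal((512, 50)) * 0.1
        meg_response = cnn_features @ true_weights + rng.standard_normal((1000, 50))   # 50 cortical sources

        X_train, X_test, y_train, y_test = train_test_split(cnn_features, meg_response,
                                                            test_size=0.2, random_state=0)
        encoder = Ridge(alpha=10.0).fit(X_train, y_train)
        predicted = encoder.predict(X_test)

        # Per-source correlation between predicted and measured responses on held-out images
        corrs = [np.corrcoef(predicted[:, i], y_test[:, i])[0, 1] for i in range(y_test.shape[1])]
        print(f"mean held-out encoding correlation: {np.mean(corrs):.2f}")
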
  • Segaert, K., Mazaheri, A., & Hagoort, P. (2018). Binding language: Structuring sentences through precisely timed oscillatory mechanisms. European Journal of Neuroscience, 48(7), 2651-2662. doi:10.1111/ejn.13816.

    Abstract

    Syntactic binding refers to combining words into larger structures. Using EEG, we investigated the neural processes involved in syntactic binding. Participants were auditorily presented two-word sentences (i.e. pronoun and pseudoverb such as ‘I grush’, ‘she grushes’, for which syntactic binding can take place) and wordlists (i.e. two pseudoverbs such as ‘pob grush’, ‘pob grushes’, for which no binding occurs). Comparing these two conditions, we targeted syntactic binding while minimizing contributions of semantic binding and of other cognitive processes such as working memory. We found a converging pattern of results using two distinct analysis approaches: one approach using frequency bands as defined in previous literature, and one data-driven approach in which we looked at the entire range of frequencies between 3 and 30 Hz without the constraints of pre-defined frequency bands. In the syntactic binding (relative to the wordlist) condition, a power increase was observed in the alpha and beta frequency range shortly preceding the presentation of the target word that requires binding, which was maximal over frontal-central electrodes. Our interpretation is that these signatures reflect that language comprehenders expect the need for binding to occur. Following the presentation of the target word in a syntactic binding context (relative to the wordlist condition), an increase in alpha power maximal over a left lateralized cluster of frontal-temporal electrodes was observed. We suggest that this alpha increase relates to syntactic binding taking place. Taken together, our findings suggest that increases in alpha and beta power are reflections of the distinct neural processes underlying syntactic binding.
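
    As a rough illustration of the kind of time-frequency contrast described in this abstract, the sketch below computes complex Morlet wavelet power at 3-30 Hz for a synthetic "binding" and "wordlist" signal and compares them in the alpha band; the signals, wavelet parameters, and comparison are simplified assumptions, not the study's analysis.

        # Rough illustration of the time-frequency contrast described above: complex Morlet
        # wavelet power at 3-30 Hz for a synthetic "binding" and "wordlist" signal, compared in
        # the alpha band. Single synthetic trials only; the study analyses epochs across
        # electrodes and participants with appropriate statistics.
        import numpy as np

        def morlet_power(signal, fs, freqs, n_cycles=7):
            """Time-frequency power (freqs x time) via convolution with complex Morlet wavelets."""
            power = np.empty((len(freqs), len(signal)))
            for i, f in enumerate(freqs):
                sigma_t = n_cycles / (2 * np.pi * f)                     # wavelet width in seconds
                t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
                wavelet = np.exp(2j * np.pi * f * t - t**2 / (2 * sigma_t**2))
                power[i] = np.abs(np.convolve(signal, wavelet / np.linalg.norm(wavelet), mode="same")) ** 2
            return power

        fs = 500
        t = np.arange(0, 2, 1 / fs)
        rng = np.random.default_rng(5)
        binding = rng.standard_normal(t.size) + 0.8 * np.sin(2 * np.pi * 10 * t)   # extra alpha (toy)
        wordlist = rng.standard_normal(t.size)

        freqs = np.arange(3, 31)                           # 3-30 Hz, as in the data-driven analysis
        alpha = (freqs >= 8) & (freqs <= 12)
        ratio = morlet_power(binding, fs, freqs)[alpha].mean() / morlet_power(wordlist, fs, freqs)[alpha].mean()
        print(f"alpha power, binding relative to wordlist: {ratio:.2f}")
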
  • Seidl, A., & Cristia, A. (2008). Developmental changes in the weighting of prosodic cues. Developmental Science, 11, 596-606. doi:10.1111/j.1467-7687.2008.00704.x.

    Abstract

    Previous research has shown that the weighting of, or attention to, acoustic cues at the level of the segment changes over the course of development (Nittrouer & Miller, 1997; Nittrouer, Manning & Meyer, 1993). In this paper we examined changes over the course of development in weighting of acoustic cues at the suprasegmental level. Specifically, we tested English-learning 4-month-olds’ performance on a clause segmentation task when each of three acoustic cues to clausal units was neutralized and contrasted it with performance on a Baseline condition where no cues were manipulated. Comparison with the reported performance of 6-month-olds on the same task (Seidl, 2007) reveals that 4-month-olds weight prosodic cues to clausal boundaries differently than 6-month-olds, relying more heavily on all three correlates of clausal boundaries (pause, pitch and vowel duration) than 6-month-olds do, who rely primarily on pitch. We interpret this as evidence that 4-month-olds use a holistic processing strategy, while 6-month-olds may already be able to attend separately to isolated cues in the input stream and may, furthermore, be able to exploit a language-specific cue weighting. Thus, in a way similar to that in other cognitive domains, infants begin as holistic auditory scene processors and are only later able to process individual auditory cues.
  • Seifart, F., Drude, S., Franchetto, B., Gasché, J., Golluscio, L., & Manrique, E. (2008). Language documentation and archives in South America. Language Documentation and Conservation, 2(1), 130-140. Retrieved from http://nflrc.hawaii.edu/ldc/June2008/.

    Abstract

    This paper addresses a set of issues related to language documentation that are not often explicitly dealt with in academic publications, yet are highly important for the development and success of this new discipline. These issues include embedding language documentation in the socio-political context not only at the community level but also at the national level, the ethical and technical challenges of digital language archives, and the importance of regional and international cooperation among documentation activities. These issues play a major role in the initiative to set up a network of regional language archives in three South American countries, which this paper reports on. Local archives for data on endangered languages have recently been set up in Iquitos (Peru), Buenos Aires (Argentina), and in various locations in Brazil. An important feature of these is that they provide fast and secure access to linguistic and cultural data for local researchers and the language communities. They also make data safer by allowing for regular update procedures within the network.
  • Seifart, F., Evans, N., Hammarström, H., & Levinson, S. C. (2018). Language documentation twenty-five years on. Language, 94(4), e324-e345. doi:10.1353/lan.2018.0070.

    Abstract

    This discussion note reviews responses of the linguistics profession to the grave issues of language endangerment identified a quarter of a century ago in the journal Language by Krauss, Hale, England, Craig, and others (Hale et al. 1992). Two and a half decades of worldwide research not only have given us a much more accurate picture of the number, phylogeny, and typological variety of the world’s languages, but they have also seen the development of a wide range of new approaches, conceptual and technological, to the problem of documenting them. We review these approaches and the manifold discoveries they have unearthed about the enormous variety of linguistic structures. The reach of our knowledge has increased by about 15% of the world’s languages, especially in terms of digitally archived material, with about 500 languages now reasonably documented thanks to such major programs as DoBeS, ELDP, and DEL. But linguists are still falling behind in the race to document the planet’s rapidly dwindling linguistic diversity, with around 35–42% of the world’s languages still substantially undocumented, and in certain countries (such as the US) the call by Krauss (1992) for a significant professional realignment toward language documentation has only been heeded in a few institutions. Apart from the need for an intensified documentarist push in the face of accelerating language loss, we argue that existing language documentation efforts need to do much more to focus on crosslinguistically comparable data sets, sociolinguistic context, semantics, and interpretation of text material, and on methods for bridging the ‘transcription bottleneck’, which is creating a huge gap between the amount we can record and the amount in our transcribed corpora.
  • Sekine, K. (2008). A review of psychological studies on development of spontaneous gestures in preschool age. The Japanese Journal of Educational Psychology, 56(3), 440-453. doi:10.5926/jjep1953.56.3_440.

    Abstract

    Previous studies of the development of gestures have examined gestures in infants. In recent years, together with the rise of interest in spontaneous gestures accompanied by speech, research on spontaneous gestures in preschool-age children has increased. But little has been reported in terms of systematic developmental changes in children's spontaneous gestures, especially with respect to preschool-age children. The present paper surveys domestic and international research on the development of spontaneous gestures in preschoolers. When gestures seen in infants and preschool-age and older children were categorized, it was found that spontaneous gestures begin to appear together with speech semantically and temporally by the end of the one-word period; during this same period, gestures that were seen earlier gradually decrease. It is suggested that the development of spontaneous gestures relates to the sentence level, not to the vocabulary level. Based on growth point theory (McNeill, 1992), it is also argued that spontaneous gestures develop with “thinking for speaking” and symbolic ability.
  • Sekine, K., & Furuyama, N. (2010). Developmental change of discourse cohesion in speech and gestures among Japanese elementary school children. Rivista di psicolinguistica applicata, 10(3), 97-116. doi:10.1400/152613.

    Abstract

    This study investigates the development of bi-modal reference maintenance by focusing on how Japanese elementary school children introduce and track animate referents in their narratives. Sixty elementary school children participated in this study, 10 from each school year (from 7 to 12 years of age). They were instructed to remember a cartoon and retell the story to their parents. We found that although there were no differences in the speech indices among the different ages, the average scores for the gesture indices of the 12-year-olds were higher than those of the other age groups. In particular, the amount of referential gestures radically increased at 12, and these children tended to use referential gestures not only for tracking referents but also for introducing characters. These results indicate that the ability to maintain a reference to create coherent narratives increases at about age 12.
  • Sekine, K., Wood, C., & Kita, S. (2018). Gestural depiction of motion events in narrative increases symbolic distance with age. Language, Interaction and Acquisition, 9(1), 11-21. doi:10.1075/lia.15020.sek.

    Abstract

    We examined gesture representation of motion events in narratives produced by three- and nine-year-olds, and adults. Two aspects of gestural depiction were analysed: how protagonists were depicted, and how gesture space was used. We found that older groups were more likely to express protagonists as an object that a gesturing hand held and manipulated, and less likely to express protagonists with whole-body enactment gestures. Furthermore, for older groups, gesture space increasingly became less similar to narrated space. The older groups were less likely to use large gestures or gestures in the periphery of the gesture space to represent movements that were large relative to a protagonist’s body or that took place next to a protagonist. They were also less likely to produce gestures on a physical surface (e.g. table) to represent movement on a surface in narrated events. The development of gestural depiction indicates that older speakers become less immersed in the story world and start to control and manipulate story representation from an outside perspective in a bounded and stage-like gesture space. We discuss this developmental shift in terms of increasing symbolic distancing (Werner & Kaplan, 1963).
  • Sekine, K. (2010). The role of gestures contributing to speech production in children. The Japanese Journal of Qualitative Psychology, 9, 115-132.
  • Senft, G. (2008). The case: The Trobriand Islanders vs H.P. Grice: Kilivila and the Gricean maxims of quality and manner. Anthropos, 103, 139-147.

    Abstract

    The Gricean maxim of Quality “Try to make your contribution one that is true” and his maxim of Manner “Be perspicuous” are not observed in Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea. Speakers of Kilivila metalinguistically differentiate registers of their language. One of these varieties is called biga sopa. This label can be glossed as “joking or lying speech, indirect speech, speech which is not vouched for.” The biga sopa constitutes the default register of Trobriand discourse. This article describes the concept of sopa, presents its features, and discusses and illustrates its functions and use within Trobriand society. The article ends with a discussion of the relevance of Gricean maxims for the research of everyday verbal interaction in Kilivila and a general criticism of these maxims, especially from an anthropological linguistic perspective. [Trobriand Islanders, Gricean maxims, varieties of Kilivila, Kilivila sopa, un-plain speaking]
  • Senft, G. (1992). Bakavilisi Biga - or: What happens to English words in the Kilivila Language? Language and Linguistics in Melanesia, 23, 13-49.
  • Senft, G. (1998). Body and mind in the Trobriand Islands. Ethos, 26, 73-104. doi:10.1525/eth.1998.26.1.73.

    Abstract

    This article discusses how the Trobriand Islanders speak about body and mind. It addresses the following questions: Do the linguistic data fit into theories about lexical universals of body-part terminology? Can we make inferences about the Trobrianders' conceptualization of psychological and physical states on the basis of these data? If a Trobriand Islander sees these idioms as external manifestations of inner states, then can we interpret them as a kind of ethnopsychological theory about the body and its role for emotions, knowledge, thought, memory, and so on? Can these idioms be understood as a representation of Trobriand ethnopsychological theory?
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (2010). Argonauten mit Außenbordmotoren - Feldforschung auf den Trobriand-Inseln (Papua-Neuguinea) seit 1982. Mitteilungen der Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte, 31, 115-130.

    Abstract

    I have been studying the language and culture of the Trobriand Islanders of Papua New Guinea since 1982. After what are by now 15 trips to the Trobriand Islands, which to date add up to almost four years of living and working in the village of Tauwema on the island of Kaile'una, I was invited by Markus Schindlbeck and Alix Hänsel to report on my fieldwork to the members of the „Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte“. This is what I do in the following. I first describe how I came to the Trobriand Islands and how I found my way around there, and then report on the kind of research I have carried out over all these years, the forms of language and culture change I have observed in the process, and the expectations I have, on the basis of my experience so far, for the future of the Trobriand Islanders and for their language and culture.
  • Senft, G. (1998). [Review of the book Anthropological linguistics: An introduction by William A. Foley]. Linguistics, 36, 995-1001.
  • Senft, G. (2010). [Review of the book Consequences of contact: Language ideologies and sociocultural transformations in Pacific societies ed. by Miki Makihara and Bambi B. Schieffelin]. Paideuma. Mitteilungen zur Kulturkunde, 56, 308-313.
  • Senft, G. (1997). [Review of the book The design of language: An introduction to descriptive linguistics by Terry Crowley, John Lynch, Jeff Siegel, and Julie Piau]. Linguistics, 35, 781-785.
  • Senft, G. (1992). [Review of the book The Yimas language of New Guinea by William A. Foley]. Linguistics, 30, 634-639.
  • Senft, G. (2008). [Review of the book Expeditionen in die Südsee: Begleitbuch zur Ausstellung und Geschichte der Südsee Sammlung des Ethnologischen Museums ed. by Markus Schindlbeck]. Paideuma, 54, 317-320.
  • Senft, G. (2004). [Review of the book Serial verbs in Oceanic: A descriptive typology by Terry Crowley]. Linguistics, 42(4), 855-859. doi:10.1515/ling.2004.028.
  • Senft, G. (2004). [Review of the book The Oceanic Languages by John Lynch, Malcolm Ross and Terry Crowley]. Linguistics, 42(2), 515-520. doi:10.1515/ling.2004.016.
  • Senft, G. (2008). Landscape terms and place names in the Trobriand Islands - The Kaile'una subset. Language Sciences, 30(2/3), 340-361. doi:10.1016/j.langsci.2006.12.001.

    Abstract

    After a brief introduction to the topic the paper first gives an overview of Kilivila landscape terms and then presents the inventory of names for villages, wells, island points, reef-channels and gardens on Kaile'una Island, one of the Trobriand Islands in the Milne Bay Province of Papua New Guinea. The data on the meaning of the place names presented were gathered in 2004 with six male consultants (between the ages of 36 and 64 years) living in the village Tauwema on Kaile'una Island. Thus, the list of place names is quite possibly not the complete sample, but it is reasonably representative of the types of Kilivila place names. After discussing the meaning of these terms the paper presents a first attempt to typologically classify and categorize the place names. The paper ends with a critical discussion of the landscape terms and the proposed typology for place names.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.
  • Senft, G. (1997). Magical conversation on the Trobriand Islands. Anthropos, 92, 369-391.
  • Senft, G. (1985). Kilivila: Die Sprache der Trobriander. Studium Linguistik, 17/18, 127-138.
  • Senft, G. (1985). Klassifikationspartikel im Kilivila: Glossen zu ihrer morphologischen Rolle, ihrem Inventar und ihrer Funktion in Satz und Diskurs. Linguistische Berichte, 99, 373-393.
  • Senft, G. (1992). Everything we always thought we knew about space - but did not bother to question. Working Papers of the Cognitive Anthropology Research group at the MPI for Psycholinguistics, 10.
  • Senft, G. (1985). Weyeis Wettermagie: Eine ethnolinguistische Untersuchung von fünf magischen Formeln eines Wettermagiers auf den Trobriand Inseln. Zeitschrift für Ethnologie, 110(2), 67-90.
