Publications

  • Nijland, L., & Janse, E. (Eds.). (2009). Auditory processing in speakers with acquired or developmental language disorders [Special Issue]. Clinical Linguistics and Phonetics, 23(3).
  • Noordzij, M., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2009). Brain mechanisms underlying human communication. Frontiers in Human Neuroscience, 3:14. doi:10.3389/neuro.09.014.2009.

    Abstract

Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neuron system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.
  • Norcliffe, E., Harris, A., & Jaeger, T. F. (2015). Cross-linguistic psycholinguistics and its critical role in theory development: early beginnings and recent advances. Language, Cognition and Neuroscience, 30(9), 1009-1032. doi:10.1080/23273798.2015.1080373.

    Abstract

    Recent years have seen a small but growing body of psycholinguistic research focused on typologically diverse languages. This represents an important development for the field, where theorising is still largely guided by the often implicit assumption of universality. This paper introduces a special issue of Language, Cognition and Neuroscience devoted to the topic of cross-linguistic and field-based approaches to the study of psycholinguistics. The papers in this issue draw on data from a variety of genetically and areally divergent languages, to address questions in the production and comprehension of phonology, morphology, words, and sentences. To contextualise these studies, we provide an overview of the field of cross-linguistic psycholinguistics, from its early beginnings to the present day, highlighting instances where cross-linguistic data have significantly contributed to psycholinguistic theorising.
  • Norcliffe, E., Konopka, A. E., Brown, P., & Levinson, S. C. (2015). Word order affects the time course of sentence formulation in Tzeltal. Language, Cognition and Neuroscience, 30(9), 1187-1208. doi:10.1080/23273798.2015.1006238.

    Abstract

    The scope of planning during sentence formulation is known to be flexible, as it can be influenced by speakers' communicative goals and language production pressures (among other factors). Two eye-tracked picture description experiments tested whether the time course of formulation is also modulated by grammatical structure and thus whether differences in linear word order across languages affect the breadth and order of conceptual and linguistic encoding operations. Native speakers of Tzeltal [a primarily verb–object–subject (VOS) language] and Dutch [a subject–verb–object (SVO) language] described pictures of transitive events. Analyses compared speakers' choice of sentence structure across events with more accessible and less accessible characters as well as the time course of formulation for sentences with different word orders. Character accessibility influenced subject selection in both languages in subject-initial and subject-final sentences, ruling against a radically incremental formulation process. In Tzeltal, subject-initial word orders were preferred over verb-initial orders when event characters had matching animacy features, suggesting a possible role for similarity-based interference in influencing word order choice. Time course analyses revealed a strong effect of sentence structure on formulation: In subject-initial sentences, in both Tzeltal and Dutch, event characters were largely fixated sequentially, while in verb-initial sentences in Tzeltal, relational information received priority over encoding of either character during the earliest stages of formulation. The results show a tight parallelism between grammatical structure and the order of encoding operations carried out during sentence formulation.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Norris, D., & Cutler, A. (1985). Juncture detection. Linguistics, 23, 689-705.
  • Obleser, J., & Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends in Cognitive Sciences, 13, 14-19. doi:10.1016/j.tics.2008.09.005.

    Abstract

    Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.

  • Ogasawara, N., & Warner, N. (2009). Processing missing vowels: Allophonic processing in Japanese. Language and Cognitive Processes, 24, 376-411. doi:10.1080/01690960802084028.

    Abstract

    The acoustic realisation of a speech sound varies, often showing allophonic variation triggered by surrounding sounds. Listeners recognise words and sounds well despite such variation, and even make use of allophonic variability in processing. This study reports five experiments on processing of the reduced/unreduced allophonic alternation of Japanese high vowels. The results show that listeners use phonological knowledge of their native language during phoneme processing and word recognition. However, interactions of the phonological and acoustic effects differ in these two processes. A facilitatory phonological effect and an inhibitory acoustic effect cancel one another out in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect. Four potential models of the processing of allophonic variation are discussed. The results can be accommodated in two of them, but require additional assumptions or modifications to the models, and primarily support lexical specification of allophonic variability.

  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
  • Orfanidou, E., McQueen, J. M., Adam, R., & Morgan, G. (2015). Segmentation of British Sign Language (BSL): Mind the gap! Quarterly Journal of Experimental Psychology, 68, 641-663. doi:10.1080/17470218.2014.945467.

    Abstract

This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
  • Ortega, G., & Morgan, G. (2015). Input processing at first exposure to a sign language. Second Language Research, 19(10), 443-463. doi:10.1177/0267658315576822.

    Abstract

There is growing interest in learners’ cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back on their L1 to process novel signs because the modality differences between speech (aural–oral) and sign (visual–manual) do not allow for direct cross-linguistic influence. Sign language learners might use alternative strategies to process input expressed in the manual channel. Learners may rely on iconicity, the direct relationship between a sign and its referent. Evidence up to now has shown that iconicity facilitates learning in non-signers, but it is unclear whether it also facilitates sign production. In order to fill this gap, the present study investigated how iconicity influenced articulation of the phonological components of signs. In Study 1, hearing non-signers viewed a set of iconic and arbitrary signs along with their English translations and repeated the signs as accurately as possible immediately after. The results show that participants imitated iconic signs significantly less accurately than arbitrary signs. In Study 2, a second group of hearing non-signers imitated the same set of signs but without the accompanying English translations. The same lower accuracy for iconic signs was observed. We argue that learners rely on iconicity to process manual input because it brings familiarity to the target (sign) language. However, this reliance comes at a cost as it leads to a more superficial processing of the signs’ full phonetic form. The present findings add to our understanding of learners’ cognitive capacities at first exposure to a signed L2, and raise new theoretical questions in the field of second language acquisition.
  • Ortega, G., & Morgan, G. (2015). Phonological development in hearing learners of a sign language: The role of sign complexity and iconicity. Language Learning, 65(3), 660-668. doi:10.1111/lang.12123.

    Abstract

The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that some sign components are produced more accurately than others: Handshape was the most difficult, followed by movement, then orientation, and finally location. Iconic signs were articulated less accurately than arbitrary signs because the direct sign-referent mappings and perhaps their similarity with iconic co-speech gestures prevented learners from focusing on the exact phonological structure of the sign. This study shows that multiple phonological features pose greater demand on the production of the parameters of signs and that iconicity interferes in the exact articulation of their constituents.
  • Ortega, G., & Morgan, G. (2015). The effect of sign iconicity in the mental lexicon of hearing non-signers and proficient signers: Evidence of cross-modal priming. Language, Cognition and Neuroscience, 30(5), 574-585. doi:10.1080/23273798.2014.959533.

    Abstract

The present study investigated the priming effect of iconic signs in the mental lexicon of hearing adults. Non-signers and proficient British Sign Language (BSL) users took part in a cross-modal lexical decision task. The results indicate that iconic signs activated semantically related words in non-signers' lexicon. Activation occurred regardless of the type of referent because signs depicting actions and perceptual features of an object yielded the same response times. The pattern of activation was different in proficient signers because only action signs led to cross-modal activation. We suggest that non-signers process iconicity in signs in the same way as they do gestures, but after acquiring a sign language, there is a shift in the mechanisms used to process iconic manual structures.
  • Otten, M., & Van Berkum, J. J. A. (2009). Does working memory capacity affect the ability to predict upcoming words in discourse? Brain Research, 1291, 92-101. doi:10.1016/j.brainres.2009.07.042.

    Abstract

Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers in their overall capability to make predictions (because of their more limited cognitive resources). High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive ‘prime control’ stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300–600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900–1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
  • Ozyurek, A., Furman, R., & Goldin-Meadow, S. (2015). On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 42, 64-94. doi:10.1017/S0305000913000512.

    Abstract

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Perlman, M., Clark, N., & Falck, M. J. (2015). Iconic prosody in story reading. Cognitive Science, 39(6), 1348-1368. doi:10.1111/cogs.12190.

    Abstract

    Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that contrasted along concrete and abstract semantic dimensions of speed (e.g., a fast drive, slow career progress) and size (e.g., a small grasshopper, an important contract). Participants read fast stories at a faster rate than slow stories, and big stories with a lower pitch than small stories. The effect of speed was distributed across the stories, including portions that were identical across stories, whereas the size effect was localized to size-related words. Overall, these findings enrich the documentation of iconicity in spoken language and bear on our understanding of the relationship between gesture and speech.
  • Perlman, M., Dale, R., & Lupyan, G. (2015). Iconicity can ground the creation of vocal symbols. Royal Society Open Science, 2: 150152. doi:10.1098/rsos.150152.

    Abstract

    Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative ‘vocal’ charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic ‘meaning template’ we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
  • Perlman, M., & Clark, N. (2015). Learned vocal and breathing behavior in an enculturated gorilla. Animal Cognition, 18(5), 1165-1179. doi:10.1007/s10071-015-0889-6.

    Abstract

    We describe the repertoire of learned vocal and breathing-related behaviors (VBBs) performed by the enculturated gorilla Koko. We examined a large video corpus of Koko and observed 439 VBBs spread across 161 bouts. Our analysis shows that Koko exercises voluntary control over the performance of nine distinctive VBBs, which involve variable coordination of her breathing, larynx, and supralaryngeal articulators like the tongue and lips. Each of these behaviors is performed in the context of particular manual action routines and gestures. Based on these and other findings, we suggest that vocal learning and the ability to exercise volitional control over vocalization, particularly in a multimodal context, might have figured relatively early into the evolution of language, with some rudimentary capacity in place at the time of our last common ancestor with great apes.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2015). Does space structure spatial language? A comparison of spatial expression across sign languages. Language, 91(3), 611-641.

    Abstract

The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g. cup on table) and figure-figure (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (2015). The Influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11. doi:10.1111/tops.12127.

    Abstract

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (Eds.). (2015). The influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture [Special Issue]. Topics in Cognitive Science, 7(1). doi:10.1111/tops.12113.
  • Perniss, P. M., & Ozyurek, A. (2015). Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science, 7(1), 36-60. doi:10.1111/tops.12122.

    Abstract

Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.
  • Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and Its Relation to Lexical Category and Age of Acquisition. PLoS One, 10(9): e0137147. doi:10.1371/journal.pone.0137147.

    Abstract

    Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most “arbitrary” spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.

    Additional information

    1536057.zip
  • Peter, M., Chang, F., Pine, J. M., Blything, R., & Rowland, C. F. (2015). When and how do children develop knowledge of verb argument structure? Evidence from verb bias effects in a structural priming task. Journal of Memory and Language, 81, 1-15. doi:10.1016/j.jml.2014.12.002.

    Abstract

    In this study, we investigated when children develop adult-like verb–structure links, and examined two mechanisms, associative and error-based learning, that might explain how these verb–structure links are learned. Using structural priming, we tested children’s and adults’ ability to use verb–structure links in production in three ways; by manipulating: (1) verb overlap between prime and target, (2) target verb bias, and (3) prime verb bias. Children (aged 3–4 and 5–6 years old) and adults heard and produced double object dative (DOD) and prepositional object dative (PD) primes with DOD- and PD-biased verbs. Although all age groups showed significant evidence of structural priming, only adults showed increased priming when there was verb overlap between prime and target sentences (the lexical boost). The effect of target verb bias also grew with development. Critically, however, the effect of prime verb bias on the size of the priming effect (prime surprisal) was larger in children than in adults, suggesting that verb–structure links are present at the earliest age tested. Taken as a whole, the results suggest that children begin to acquire knowledge about verb-argument structure preferences early in acquisition, but that the ability to use adult-like verb bias in production gradually improves over development. We also argue that this pattern of results is best explained by a learning model that uses an error-based learning mechanism.
  • Pettigrew, K. A., Fajutrao Valles, S. F., Moll, K., Northstone, K., Ring, S., Pennell, C., Wang, C., Leavett, R., Hayiou-Thomas, M. E., Thompson, P., Simpson, N. H., Fisher, S. E., The SLI Consortium, Whitehouse, A. J., Snowling, M. J., Newbury, D. F., & Paracchini, S. (2015). Lack of replication for the myosin-18B association with mathematical ability in independent cohorts. Genes, Brain and Behavior, 14(4), 369-376. doi:10.1111/gbb.12213.

    Abstract

    Twin studies indicate that dyscalculia (or mathematical disability) is caused partly by a genetic component, which is yet to be understood at the molecular level. Recently, a coding variant (rs133885) in the myosin-18B gene was shown to be associated with mathematical abilities with a specific effect among children with dyslexia. This association represents one of the most significant genetic associations reported to date for mathematical abilities and the only one reaching genome-wide statistical significance.

    We conducted a replication study in different cohorts to assess the effect of rs133885 on maths-related measures. The study was conducted primarily using the Avon Longitudinal Study of Parents and Children (ALSPAC; N = 3,819). We tested additional cohorts including the York Cohort, the Specific Language Impairment Consortium (SLIC) cohort and the Raine Cohort, and stratified them for a definition of dyslexia whenever possible.

    We did not observe any associations between rs133885 in myosin-18B and mathematical abilities among individuals with dyslexia or in the general population. Our results suggest that the myosin-18B variant is unlikely to be a main factor contributing to mathematical abilities.
  • Piai, V., Roelofs, A., Rommers, J., & Maris, E. (2015). Beta oscillations reflect memory and motor aspects of spoken word production. Human brain mapping, 36(7), 2767-2780. doi:10.1002/hbm.22806.

    Abstract

    Two major components form the basis of spoken word production: the access of conceptual and lexical/phonological information in long-term memory, and motor preparation and execution of an articulatory program. Whereas the motor aspects of word production have been well characterized as reflected in alpha-beta desynchronization, the memory aspects have remained poorly understood. Using magnetoencephalography, we investigated the neurophysiological signature of not only motor but also memory aspects of spoken-word production. Participants named or judged pictures after reading sentences. To probe the involvement of the memory component, we manipulated sentence context. Sentence contexts were either constraining or nonconstraining toward the final word, presented as a picture. In the judgment task, participants indicated with a left-hand button press whether the picture was expected given the sentence. In the naming task, they named the picture. Naming and judgment were faster with constraining than nonconstraining contexts. Alpha-beta desynchronization was found for constraining relative to nonconstraining contexts pre-picture presentation. For the judgment task, beta desynchronization was observed in left posterior brain areas associated with conceptual processing and in right motor cortex. For the naming task, in addition to the same left posterior brain areas, beta desynchronization was found in left anterior and posterior temporal cortex (associated with memory aspects), left inferior frontal cortex, and bilateral ventral premotor cortex (associated with motor aspects). These results suggest that memory and motor components of spoken word production are reflected in overlapping brain oscillations in the beta band.

    Additional information

    hbm22806-sup-0001-suppinfo1.docx

  • Piai, V., Roelofs, A., & Roete, I. (2015). Semantic interference in picture naming during dual-task performance does not vary with reading ability. Quarterly Journal of Experimental Psychology, 68(9), 1758-1768. doi:10.1080/17470218.2014.985689.

    Abstract

    Previous dual-task studies examining the locus of semantic interference of distractor words in picture naming have obtained diverging results. In these studies, participants manually responded to tones and named pictures while ignoring distractor words (picture-word interference, PWI) with varying stimulus onset asynchrony (SOA) between tone and PWI stimulus. Whereas some studies observed no semantic interference at short SOAs, other studies observed effects of similar magnitude at short and long SOAs. The absence of semantic interference in some studies may perhaps be due to better reading skill of participants in these than in the other studies. According to such a reading-ability account, participants' reading skill should be predictive of the magnitude of their interference effect at short SOAs. To test this account, we conducted a dual-task study with tone discrimination and PWI tasks and measured participants' reading ability. The semantic interference effect was of similar magnitude at both short and long SOAs. Participants' reading ability was predictive of their naming speed but not of their semantic interference effect, contrary to the reading ability account. We conclude that the magnitude of semantic interference in picture naming during dual-task performance does not depend on reading skill.
  • Pijls, F., Daelemans, W., & Kempen, G. (1987). Artificial intelligence tools for grammar and spelling instruction. Instructional Science, 16(4), 319-336. doi:10.1007/BF00117750.

    Abstract

    In The Netherlands, grammar teaching is an especially important subject in the curriculum of children aged 10-15 for several reasons. However, in spite of all attention and time invested, the results are poor. This article describes the problems and our attempt to overcome them by developing an intelligent computational instructional environment consisting of: a linguistic expert system, containing a module representing grammar and spelling rules and a number of modules to manipulate these rules; a didactic module; and a student interface with special facilities for grammar and spelling. Three prototypes of the functionality are discussed: BOUWSTEEN and COGO, which are programs for constructing and analyzing Dutch sentences; and TDTDT, a program for the conjugation of Dutch verbs.
  • Pijls, F., & Kempen, G. (1987). Kennistechnologische leermiddelen in het grammatica- en spellingonderwijs. Nederlands Tijdschrift voor de Psychologie, 42, 354-363.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Kan, C. C., Buitelaar, J. K., & Hagoort, P. (2009). Defeasible reasoning in high-functioning adults with autism: Evidence for impaired exception-handling. Neuropsychologia, 47, 644-651. doi:10.1016/j.neuropsychologia.2008.11.011.

    Abstract

    While autism is one of the most intensively researched psychiatric disorders, little is known about reasoning skills of people with autism. The focus of this study was on defeasible inferences, that is inferences that can be revised in the light of new information. We used a behavioral task to investigate (a) conditional reasoning and (b) the suppression of conditional inferences in high-functioning adults with autism. In the suppression task a possible exception was made salient which could prevent a conclusion from being drawn. We predicted that the autism group would have difficulties dealing with such exceptions because they require mental flexibility to adjust to the context, which is often impaired in autism. The findings confirm our hypothesis that high-functioning adults with autism have a specific difficulty with exception-handling during reasoning. It is suggested that defeasible reasoning is also involved in other cognitive domains. Implications for neural underpinnings of reasoning and autism are discussed.
  • Pijnacker, J., Hagoort, P., Buitelaar, J., Teunisse, J.-P., & Geurts, B. (2009). Pragmatic inferences in high-functioning adults with autism and Asperger syndrome. Journal of Autism and Developmental Disorders, 39(4), 607-618. doi:10.1007/s10803-008-0661-8.

    Abstract

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like “Some sparrows are birds”. This sentence is logically true, but pragmatically inappropriate if the scalar implicature “Not all sparrows are birds” is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome.
  • Plomp, G., Hervais-Adelman, A., Astolfi, L., & Michel, C. M. (2015). Early recurrence and ongoing parietal driving during elementary visual processing. Scientific Reports, 5: 18733. doi:10.1038/srep18733.

    Abstract

    Visual stimuli quickly activate a broad network of brain areas that often show reciprocal structural connections between them. Activity at short latencies (<100 ms) is thought to represent a feed-forward activation of widespread cortical areas, but fast activation combined with reciprocal connectivity between areas in principle allows for two-way, recurrent interactions to occur at short latencies after stimulus onset. Here we combined EEG source-imaging and Granger-causal modeling with high temporal resolution to investigate whether recurrent and top-down interactions between visual and attentional brain areas can be identified and distinguished at short latencies in humans. We investigated the directed interactions between widespread occipital, parietal and frontal areas that we localized within participants using fMRI. The connectivity results showed two-way interactions between area MT and V1 already at short latencies. In addition, the results suggested a large role for lateral parietal cortex in coordinating visual activity that may be understood as an ongoing top-down allocation of attentional resources. Our results support the notion that indirect pathways allow early, evoked driving from MT to V1 to highlight spatial locations of motion transients, while influence from parietal areas is continuously exerted around stimulus onset, presumably reflecting task-related attentional processes.
  • Poletiek, F. H., & Van Schijndel, T. J. P. (2009). Stimulus set size and statistical coverage of the grammar in artificial grammar learning. Psychonomic Bulletin & Review, 16(6), 1058-1064. doi:10.3758/PBR.16.6.1058.

    Abstract

    Adults and children acquire knowledge of the structure of their environment on the basis of repeated exposure to samples of structured stimuli. In the study of inductive learning, a straightforward issue is how much sample information is needed to learn the structure. The present study distinguishes between two measures for the amount of information in the sample: set size and the extent to which the set of exemplars statistically covers the underlying structure. In an artificial grammar learning experiment, learning was affected by the sample’s statistical coverage of the grammar, but not by its mere size. Our result suggests an alternative explanation of the set size effects on learning found in previous studies (McAndrews & Moscovitch, 1985; Meulemans & Van der Linden, 1997), because, as we argue, set size was confounded with statistical coverage in these studies.
  • Poletiek, F. H. (2009). Popper's Severity of Test as an intuitive probabilistic model of hypothesis testing. Behavioral and Brain Sciences, 32(1), 99-100. doi:10.1017/S0140525X09000454.
  • Poletiek, F. H., & Wolters, G. (2009). What is learned about fragments in artificial grammar learning? A transitional probabilities approach. Quarterly Journal of Experimental Psychology, 62(5), 868-876. doi:10.1080/17470210802511188.

    Abstract

    Learning local regularities in sequentially structured materials is typically assumed to be based on encoding of the frequencies of these regularities. We explore the view that transitional probabilities between elements of chunks, rather than frequencies of chunks, may be the primary factor in artificial grammar learning (AGL). The transitional probability model (TPM) that we propose is argued to provide an adaptive and parsimonious strategy for encoding local regularities in order to induce sequential structure from an input set of exemplars of the grammar. In a variant of the AGL procedure, in which participants estimated the frequencies of bigrams occurring in a set of exemplars they had been exposed to previously, participants were shown to be more sensitive to local transitional probability information than to mere pattern frequencies.
  • St Pourcain, B., Haworth, C. M. A., Davis, O. S. P., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ronald, A., Price, T., Meaburn, E., Ring, S. M., Golding, J., Hakonarson, H., Plomin, R., & Davey Smith, G. (2015). Heritability and genome-wide analyses of problematic peer relationships during childhood and adolescence. Human Genetics, 134(6), 539-551. doi:10.1007/s00439-014-1514-5.

    Abstract

    Peer behaviour plays an important role in the development of social adjustment, though little is known about its genetic architecture. We conducted a twin study combined with a genome-wide complex trait analysis (GCTA) and a genome-wide screen to characterise genetic influences on problematic peer behaviour during childhood and adolescence. This included a series of longitudinal measures (parent-reported Strengths-and-Difficulties Questionnaire) from a UK population-based birth-cohort (ALSPAC, 4–17 years), and a UK twin sample (TEDS, 4–11 years). Longitudinal twin analysis (TEDS; N ≤ 7,366 twin pairs) showed that peer problems in childhood are heritable (4–11 years, 0.60 < twin-h² ≤ 0.71) but genetically heterogeneous from age to age (4–11 years, twin-rg = 0.30). GCTA (ALSPAC: N ≤ 5,608, TEDS: N ≤ 2,691) furthermore provided little support for the contribution of measured common genetic variants during childhood (4–12 years, 0.02 < GCTA-h²(Meta) ≤ 0.11), though these influences become stronger in adolescence (13–17 years, 0.14 < GCTA-h²(ALSPAC) ≤ 0.27). A subsequent cross-sectional genome-wide screen in ALSPAC (N ≤ 6,000) focussed on peer problems with the highest GCTA-heritability (10, 13 and 17 years, 0.0002 < GCTA-P ≤ 0.03). Single variant signals (P ≤ 10⁻⁵) were followed up in TEDS (N ≤ 2,835, 9 and 11 years) and, in search for autism quantitative trait loci, explored within two autism samples (AGRE: N(pedigrees) = 793; ACC: N(cases) = 1,453 / N(controls) = 7,070). There was, however, no evidence for association in TEDS and little evidence for an overlap with the autistic continuum. In summary, our findings suggest that problematic peer relationships are heritable but genetically complex and heterogeneous from age to age, with an increase in common measurable genetic variation during adolescence.
  • Pouw, W., & Looren de Jong, H. (2015). Rethinking situated and embodied social psychology. Theory and Psychology, 25(4), 411-433. doi:10.1177/0959354315585661.

    Abstract

    This article aims to explore the scope of a Situated and Embodied Social Psychology (ESP). At first sight, social cognition seems embodied cognition par excellence. Social cognition is first and foremost a supra-individual, interactive, and dynamic process (Semin & Smith, 2013). Radical approaches in Situated/Embodied Cognitive Science (Enactivism) claim that social cognition consists in an emergent pattern of interaction between a continuously coupled organism and the (social) environment; it rejects representationalist accounts of cognition (Hutto & Myin, 2013). However, mainstream ESP (Barsalou, 1999, 2008) still takes a rather representation-friendly approach that construes embodiment in terms of specific bodily formatted representations used (activated) in social cognition. We argue that mainstream ESP suffers from vestiges of theoretical solipsism, which may be resolved by going beyond the internalistic spirit that haunts mainstream ESP today.
  • Powlesland, A. S., Hitchen, P. G., Parry, S., Graham, S. A., Barrio, M. M., Elola, M. T., Mordoh, J., Dell, A., Drickamer, K., & Taylor, M. E. (2009). Targeted glycoproteomic identification of cancer cell glycosylation. Glycobiology, 19, 899-909. doi:10.1093/glycob/cwp065.

    Abstract

    GalMBP is a fragment of serum mannose-binding protein that has been modified to create a probe for galactose-containing ligands. Glycan array screening demonstrated that the carbohydrate-recognition domain of GalMBP selectively binds common groups of tumor-associated glycans, including Lewis-type structures and T antigen, suggesting that engineered glycan-binding proteins such as GalMBP represent novel tools for the characterization of glycoproteins bearing tumor-associated glycans. Blotting of cell extracts and membranes from MCF7 breast cancer cells with radiolabeled GalMBP was used to demonstrate that it binds to a selected set of high molecular weight glycoproteins that could be purified from MCF7 cells on an affinity column constructed with GalMBP. Proteomic and glycomic analysis of these glycoproteins by mass spectrometry showed that they are forms of CD98hc that bear glycans displaying heavily fucosylated termini, including Lewis(x) and Lewis(y) structures. The pool of ligands was found to include the target ligands for anti-CD15 antibodies, which are commonly used to detect Lewis(x) antigen on tumors, and for the endothelial scavenger receptor C-type lectin, which may be involved in tumor metastasis through interactions with this antigen. A survey of additional breast cancer cell lines reveals that there is wide variation in the types of glycosylation that lead to binding of GalMBP. Higher levels of binding are associated either with the presence of outer-arm fucosylated structures carried on a variety of different cell surface glycoproteins or with the presence of high levels of the mucin MUC1 bearing T antigen.

    Additional information

    Powlesland_2009_Suppl_Mat.pdf
  • Praamstra, P., Hagoort, P., Maassen, B., & Crul, T. (1991). Word deafness and auditory cortical function: A case history and hypothesis. Brain, 114, 1197-1225. doi:10.1093/brain/114.3.1197.

    Abstract

    A patient who already had Wernicke's aphasia due to a left temporal lobe lesion suffered a severe deterioration specifically of auditory language comprehension, subsequent to right temporal lobe infarction. A detailed comparison of his new condition with his language status before the second stroke revealed that the newly acquired deficit was limited to tasks related to auditory input. Further investigations demonstrated a speech perceptual disorder, which we analysed as due to deficits both at the level of general auditory processes and at the level of phonetic analysis. We discuss some arguments related to hemisphere specialization of phonetic processing and to the disconnection explanation of word deafness that support the hypothesis of word deafness being generally caused by mixed deficits.
  • Protopapas, A., & Gerakaki, S. (2009). Development of processing stress diacritics in reading Greek. Scientific Studies of Reading, 13(6), 453-483. doi:10.1080/10888430903034788.

    Abstract

    In Greek orthography, stress position is marked with a diacritic. We investigated the developmental course of processing the stress diacritic in Grades 2 to 4. Ninety children read 108 pseudowords presented without or with a diacritic either in the same or in a different position relative to the source word. Half of the pseudowords resembled the words they were derived from. Results showed that lexical sources of stress assignment were active in Grade 2 and remained stronger than the diacritic through Grade 4. The effect of the diacritic increased more rapidly and approached the lexical effect with increasing grade. In a second experiment, 90 children read 54 words and 54 pseudowords. The pattern of results for words was similar to that for nonwords, suggesting that findings regarding stress assignment using nonwords may generalize to word reading. Decoding of the diacritic does not appear to be the preferred option for developing readers.
  • Pylkkänen, L., Martin, A. E., McElree, B., & Smart, A. (2009). The Anterior Midline Field: Coercion or decision making? Brain and Language, 108(3), 184-190. doi:10.1016/j.bandl.2008.06.006.

    Abstract

    To study the neural bases of semantic composition in language processing without confounds from syntactic composition, recent magnetoencephalography (MEG) studies have investigated the processing of constructions that exhibit some type of syntax-semantics mismatch. The most studied case of such a mismatch is complement coercion; expressions such as the author began the book, where an entity-denoting noun phrase is coerced into an eventive meaning in order to match the semantic properties of the event-selecting verb (e.g., ‘the author began reading/writing the book’). These expressions have been found to elicit increased activity in the Anterior Midline Field (AMF), an MEG component elicited at frontomedial sensors at ∼400 ms after the onset of the coercing noun [Pylkkänen, L., & McElree, B. (2007). An MEG study of silent meaning. Journal of Cognitive Neuroscience, 19, 11]. Thus, the AMF constitutes a potential neural correlate of coercion. However, the AMF was generated in ventromedial prefrontal regions, which are heavily associated with decision-making. This raises the possibility that, instead of semantic processing, the AMF effect may have been related to the experimental task, which was a sensicality judgment. We tested this hypothesis by assessing the effect of coercion when subjects were simply reading for comprehension, without a decision-task. Additionally, we investigated coercion in an adjectival rather than a verbal environment to further generalize the findings. Our results show that an AMF effect of coercion is elicited without a decision-task and that the effect also extends to this novel syntactic environment. We conclude that in addition to its role in non-linguistic higher cognition, ventromedial prefrontal regions contribute to the resolution of syntax-semantics mismatches in language processing.
  • Qin, S., Rijpkema, M., Tendolkar, I., Piekema, C., Hermans, E. J., Binder, M., Petersson, K. M., Luo, J., & Fernández, G. (2009). Dissecting medial temporal lobe contributions to item and associative memory formation. NeuroImage, 46, 874-881. doi:10.1016/j.neuroimage.2009.02.039.

    Abstract

    A fundamental and intensively discussed question is whether medial temporal lobe (MTL) processes that lead to non-associative item memories differ in their anatomical substrate from processes underlying associative memory formation. Using event-related functional magnetic resonance imaging, we implemented a novel design to dissociate brain activity related to item and associative memory formation not only by subsequent memory performance and anatomy but also in time, because the two constituents of each pair to be memorized were presented sequentially with an intra-pair delay of several seconds. Furthermore, the design enabled us to reduce potential differences in memory strength between item and associative memory by increasing task difficulty in the item recognition memory test. Confidence ratings for correct item recognition for both constituents did not differ between trials in which only item memory was correct and trials in which item and associative memory were correct. Specific subsequent memory analyses for item and associative memory formation revealed brain activity that appears selectively related to item memory formation in the posterior inferior temporal, posterior parahippocampal, and perirhinal cortices. In contrast, hippocampal and inferior prefrontal activity predicted successful retrieval of newly formed inter-item associations. Our findings therefore suggest that different MTL subregions indeed play distinct roles in the formation of item memory and inter-item associative memory as expected by several dual process models of the MTL memory system.
  • Ravignani, A., & Sonnweber, R. (2015). Measuring teaching through hormones and time series analysis: Towards a comparative framework. Behavioral and Brain Sciences, 38, 40-41. doi:10.1017/S0140525X14000806.

    Abstract

    In response to: How to learn about teaching: An evolutionary framework for the study of teaching behavior in humans and other animals

    Arguments about the nature of teaching have depended principally on naturalistic observation and some experimental work. Additional measurement tools, and physiological variations and manipulations can provide insights on the intrinsic structure and state of the participants better than verbal descriptions alone: namely, time-series analysis, and examination of the role of hormones and neuromodulators on the behaviors of teacher and pupil.
  • Ravignani, A., Westphal-Fitch, G., Aust, U., Schlumpp, M. M., & Fitch, W. T. (2015). More than one way to see it: Individual heuristics in avian visual computation. Cognition, 143, 13-24. doi:10.1016/j.cognition.2015.05.021.

    Abstract

    Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species’ ability to process pattern classes or different species’ performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds’ choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally.

    Additional information

    Supplementary data
  • Ravignani, A. (2015). Evolving perceptual biases for antisynchrony: A form of temporal coordination beyond synchrony. Frontiers in Neuroscience, 9: 339. doi:10.3389/fnins.2015.00339.
  • Reesink, G., Singer, R., & Dunn, M. (2009). Explaining the linguistic diversity of Sahul using population models. PLoS Biology, 7(11), e1000241. doi:10.1371/journal.pbio.1000241.

    Abstract

    The region of the ancient Sahul continent (present day Australia and New Guinea, and surrounding islands) is home to extreme linguistic diversity. Even apart from the huge Austronesian language family, which spread into the area after the breakup of the Sahul continent in the Holocene, there are hundreds of languages from many apparently unrelated families. On each of the subcontinents, the generally accepted classification recognizes one large, widespread family and a number of unrelatable smaller families. If these language families are related to each other, it is at a depth which is inaccessible to standard linguistic methods. We have inferred the history of structural characteristics of these languages under an admixture model, using a Bayesian algorithm originally developed to discover populations on the basis of recombining genetic markers. This analysis identifies 10 ancestral language populations, some of which can be identified with clearly defined phylogenetic groups. The results also show traces of early dispersals, including hints at ancient connections between Australian languages and some Papuan groups (long hypothesized, never before demonstrated). Systematic language contact effects between members of big phylogenetic groups are also detected, which can in some cases be identified with a diffusional or substrate signal. Most interestingly, however, there remains striking evidence of a phylogenetic signal, with many languages showing negligible amounts of admixture.
  • Richards, J. B., Waterworth, D., O'Rahilly, S., Hivert, M.-F., Loos, R. J. F., Perry, J. R. B., Tanaka, T., Timpson, N. J., Semple, R. K., Soranzo, N., Song, K., Rocha, N., Grundberg, E., Dupuis, J., Florez, J. C., Langenberg, C., Prokopenko, I., Saxena, R., Sladek, R., Aulchenko, Y., Evans, D., Waeber, G., Erdmann, J., Burnett, M.-S., Sattar, N., Devaney, J., Willenborg, C., Hingorani, A., Witteman, J. C. M., Vollenweider, P., Glaser, B., Hengstenberg, C., Ferrucci, L., Melzer, D., Stark, K., Deanfield, J., Winogradow, J., Grassl, M., Hall, A. S., Egan, J. M., Thompson, J. R., Ricketts, S. L., König, I. R., Reinhard, W., Grundy, S., Wichmann, H.-E., Barter, P., Mahley, R., Kesaniemi, Y. A., Rader, D. J., Reilly, M. P., Epstein, S. E., Stewart, A. F. R., Van Duijn, C. M., Schunkert, H., Burling, K., Deloukas, P., Pastinen, T., Samani, N. J., McPherson, R., Davey Smith, G., Frayling, T. M., Wareham, N. J., Meigs, J. B., Mooser, V., Spector, T. D., & Consortium, G. (2009). A genome-wide association study reveals variants in ARL15 that influence adiponectin levels. PLoS Genetics, 5(12): e1000768. doi:10.1371/journal.pgen.1000768.

    Abstract

    The adipocyte-derived protein adiponectin is highly heritable and inversely associated with risk of type 2 diabetes mellitus (T2D) and coronary heart disease (CHD). We meta-analyzed 3 genome-wide association studies for circulating adiponectin levels (n = 8,531) and sought validation of the lead single nucleotide polymorphisms (SNPs) in 5 additional cohorts (n = 6,202). Five SNPs were genome-wide significant in their relationship with adiponectin (P ≤ 5x10(-8)). We then tested whether these 5 SNPs were associated with risk of T2D and CHD using a Bonferroni-corrected threshold of P ≤ 0.011 to declare statistical significance for these disease associations. SNPs at the adiponectin-encoding ADIPOQ locus demonstrated the strongest associations with adiponectin levels (P-combined = 9.2x10(-19) for lead SNP, rs266717, n = 14,733). A novel variant in the ARL15 (ADP-ribosylation factor-like 15) gene was associated with lower circulating levels of adiponectin (rs4311394-G, P-combined = 2.9x10(-8), n = 14,733). This same risk allele at ARL15 was also associated with a higher risk of CHD (odds ratio [OR] = 1.12, P = 8.5x10(-6), n = 22,421) and, more nominally, with an increased risk of T2D (OR = 1.11, P = 3.2x10(-3), n = 10,128) and several metabolic traits. Expression studies in humans indicated that ARL15 is well-expressed in skeletal muscle. These findings identify a novel protein, ARL15, which influences circulating adiponectin levels and may impact upon CHD risk.
  • Rivero, O., Selten, M. M., Sich, S., Popp, S., Bacmeister, L., Amendola, E., Negwer, M., Schubert, D., Proft, F., Kiser, D., Schmitt, A. G., Gross, C., Kolk, S. M., Strekalova, T., van den Hove, D., Resink, T. J., Nadif Kasri, N., & Lesch, K. P. (2015). Cadherin-13, a risk gene for ADHD and comorbid disorders, impacts GABAergic function in hippocampus and cognition. Translational Psychiatry, 5: e655. doi:10.1038/tp.2015.152.

    Abstract

    Cadherin-13 (CDH13), a unique glycosylphosphatidylinositol-anchored member of the cadherin family of cell adhesion molecules, has been identified as a risk gene for attention-deficit/hyperactivity disorder (ADHD) and various comorbid neurodevelopmental and psychiatric conditions, including depression, substance abuse, autism spectrum disorder and violent behavior, while the mechanism whereby CDH13 dysfunction influences pathogenesis of neuropsychiatric disorders remains elusive. Here we explored the potential role of CDH13 in the inhibitory modulation of brain activity by investigating synaptic function of GABAergic interneurons. Cellular and subcellular distribution of CDH13 was analyzed in the murine hippocampus and a mouse model with a targeted inactivation of Cdh13 was generated to evaluate how CDH13 modulates synaptic activity of hippocampal interneurons and behavioral domains related to psychopathologic (endo)phenotypes. We show that CDH13 expression in the cornu ammonis (CA) region of the hippocampus is confined to distinct classes of interneurons. Specifically, CDH13 is expressed by numerous parvalbumin and somatostatin-expressing interneurons located in the stratum oriens, where it localizes to both the soma and the presynaptic compartment. Cdh13−/− mice show an increase in basal inhibitory, but not excitatory, synaptic transmission in CA1 pyramidal neurons. Associated with these alterations in hippocampal function, Cdh13−/− mice display deficits in learning and memory. Taken together, our results indicate that CDH13 is a negative regulator of inhibitory synapses in the hippocampus, and provide insights into how CDH13 dysfunction may contribute to the excitatory/inhibitory imbalance observed in neurodevelopmental disorders, such as ADHD and autism.
  • Roberts, S. G. (2015). Commentary: Large-scale psychological differences within China explained by rice vs. wheat agriculture. Frontiers in Psychology, 6: 950. doi:10.3389/fpsyg.2015.00950.

    Abstract

    Talhelm et al. (2014) test the hypothesis that activities which require more intensive collaboration foster more collectivist cultures. They demonstrate that a measure of collectivism correlates with the proportion of cultivated land devoted to rice paddies, which require more work to grow and maintain than other grains. The data come from individual measures of provinces in China. While the data is analyzed carefully, one aspect that is not directly controlled for is the historical relations between these provinces. Spurious correlations can occur between cultural traits that are inherited from ancestor cultures or borrowed through contact, what is commonly known as Galton's problem (Roberts and Winters, 2013). Effectively, Talhelm et al. treat the measures of each province as independent samples, while in reality both farming practices (e.g., Renfrew, 1997; Diamond and Bellwood, 2003; Lee and Hasegawa, 2011) and cultural values (e.g., Currie et al., 2010; Bulbulia et al., 2013) can be inherited or borrowed. This means that the data may be composed of non-independent points, inflating the apparent correlation between rice growing and collectivism. The correlation between farming practices and collectivism may be robust, but this cannot be known without an empirical control for the relatedness of the samples. Talhelm et al. do discuss this problem in the supplementary materials of their paper. They acknowledge that a phylogenetic analysis could be used to control for relatedness, but that at the time of publication there were no genetic or linguistic trees of descent which are detailed enough to suit this purpose. In this commentary I would like to make two points. First, since the original publication, researchers have created new linguistic trees that can provide the needed resolution. 
For example, the Glottolog phylogeny (Hammarström et al., 2015) has at least three levels of classification for the relevant varieties, though this does not have branch lengths (see also “reference” trees produced in List et al., 2014). Another recently published phylogeny uses lexical data to construct a phylogenetic tree for many language varieties within China (List et al., 2014). In this commentary I use these lexical data to estimate cultural contact between different provinces, and test whether these measures explain variation in rice farming practices. However, the second point is that Talhelm et al. focus on descent (vertical transmission), while it may be relevant to control for both descent and borrowing (horizontal transmission). In this case, all that is needed is some measure of cultural contact between groups, not necessarily a unified tree of descent. I use a second source of linguistic data to calculate simple distances between languages based directly on the lexicon. These distances reflect borrowing as well as descent.
  • Roberts, S. G., Winters, J., & Chen, K. (2015). Future tense and economic decisions: Controlling for cultural evolution. PLoS One, 10(7): e0132145. doi:10.1371/journal.pone.0132145.

    Abstract

    A previous study by Chen demonstrates a correlation between languages that grammatically mark future events and their speakers' propensity to save, even after controlling for numerous economic and demographic factors. The implication is that languages which grammatically distinguish the present and the future may bias their speakers to distinguish them psychologically, leading to less future-oriented decision making. However, Chen's original analysis assumed languages are independent. This neglects the fact that languages are related, causing correlations to appear stronger than is warranted (Galton's problem). In this paper, we test the robustness of Chen's correlations to corrections for the geographic and historical relatedness of languages. While the question seems simple, the answer is complex. In general, the statistical correlation between the two variables is weaker when controlling for relatedness. When applying the strictest tests for relatedness, and when data is not aggregated across individuals, the correlation is not significant. However, the correlation did remain reasonably robust under a number of tests. We argue that any claims of synchronic patterns between cultural variables should be tested for spurious correlations, with the kinds of approaches used in this paper. However, experiments or case-studies would be more fruitful avenues for future research on this specific topic, rather than further large-scale cross-cultural correlational studies.
  • Roberts, S. G., Torreira, F., & Levinson, S. C. (2015). The effects of processing and sequence organisation on the timing of turn taking: A corpus study. Frontiers in Psychology, 6: 509. doi:10.3389/fpsyg.2015.00509.

    Abstract

    The timing of turn taking in conversation is extremely rapid given the cognitive demands on speakers to comprehend, plan and execute turns in real time. Findings from psycholinguistics predict that the timing of turn taking is influenced by demands on processing, such as word frequency or syntactic complexity. An alternative view comes from the field of conversation analysis, which predicts that the rules of turn-taking and sequence organization may dictate the variation in gap durations (e.g. the functional role of each turn in communication). In this paper, we estimate the role of these two different kinds of factors in determining the speed of turn-taking in conversation. We use the Switchboard corpus of English telephone conversation, already richly annotated for syntactic structure, speech act sequences, and segmental alignment. To this we add further information including Floor Transfer Offset (the amount of time between the end of one turn and the beginning of the next), word frequency, concreteness, and surprisal values. We then apply a novel statistical framework ('random forests') to show that these two dimensions are interwoven together with indexical properties of the speakers as explanatory factors determining the speed of response. We conclude that an explanation of the timing of turn taking will require insights from both processing and sequence organisation.
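The random-forest analysis itself would be run with existing statistical software; as a dependency-free sketch of the core idea, the permutation variable importance that such analyses report can be imitated on toy data with a stand-in predictor (none of the names or numbers below come from the paper):

```python
import random
from statistics import mean

# Toy sketch of permutation variable importance, the quantity random-forest
# analyses report per explanatory factor: shuffle one predictor at a time
# and measure how much predictive error grows. The "model" is a fixed
# stand-in here, not an actual fitted forest.
def mse(model, rows, y):
    return mean((model(r) - target) ** 2 for r, target in zip(rows, y))

def permutation_importance(model, rows, y, col, n_perm=100, seed=0):
    rng = random.Random(seed)
    base = mse(model, rows, y)
    increases = []
    for _ in range(n_perm):
        vals = [r[col] for r in rows]
        rng.shuffle(vals)
        perturbed = [r[:col] + (v,) + r[col + 1:]
                     for r, v in zip(rows, vals)]
        increases.append(mse(model, perturbed, y) - base)
    return mean(increases)

# Invented data: column 0 (say, a frequency-like measure) drives the
# response; column 1 is a noise variable the predictor ignores.
rows = [(float(i), float(i % 3)) for i in range(20)]
y = [2.0 * r[0] for r in rows]

def predict(r):
    return 2.0 * r[0]

print(permutation_importance(predict, rows, y, 0),
      permutation_importance(predict, rows, y, 1))
```

Shuffling the informative column inflates the error, while shuffling the noise column changes nothing; real forests additionally learn the predictor from data and average importances over many trees.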
  • Rodenas-Cuadrado, P., Chen, X. S., Wiegrebe, L., Firzlaff, U., & Vernes, S. C. (2015). A novel approach identifies the first transcriptome networks in bats: A new genetic model for vocal communication. BMC Genomics, 16: 836. doi:10.1186/s12864-015-2068-1.

    Abstract

    Background Bats are able to employ an astonishingly complex vocal repertoire for navigating their environment and conveying social information. A handful of species also show evidence for vocal learning, an extremely rare ability shared only with humans and few other animals. However, despite their potential for the study of vocal communication, bats remain severely understudied at a molecular level. To address this fundamental gap we performed the first transcriptome profiling and genetic interrogation of molecular networks in the brain of a highly vocal bat species, Phyllostomus discolor. Results Gene network analysis typically needs large sample sizes for correct clustering; this can be prohibitive where samples are limited, as in this study. To overcome this, we developed a novel bioinformatics methodology for identifying robust co-expression gene networks using few samples (N=6). Using this approach, we identified tissue-specific functional gene networks from the bat PAG, a brain region fundamental for mammalian vocalisation. The most highly connected network identified represented a cluster of genes involved in glutamatergic synaptic transmission. Glutamatergic receptors play a significant role in vocalisation from the PAG, suggesting that this gene network may be mechanistically important for vocal-motor control in mammals. Conclusion We have developed an innovative approach to cluster co-expressing gene networks and show that it is highly effective in detecting robust functional gene networks with limited sample sizes. Moreover, this work represents the first gene network analysis performed in a bat brain and establishes bats as a novel, tractable model system for understanding the genetics of vocal mammalian communication.
  • Rojas-Berscia, L. M. (2015). Mayna, the lost Kawapanan language. LIAMES, 15, 393-407. Retrieved from http://revistas.iel.unicamp.br/index.php/liames/article/view/4549.

    Abstract

    The origins of the Mayna language, formerly spoken in northwest Peruvian Amazonia, remain a mystery for most scholars. Several discussions on it took place at the end of the 19th century and the beginning of the 20th; however, none arrived at a consensus. Apart from an article written by Taylor & Descola (1981), suggesting a relationship with the Jivaroan language family, little to nothing has been said about it over the last half of the 20th century and the decades since. In the present article, a summary of the principal accounts on the language and its people between the 19th and the 20th century will be given, followed by a corpus analysis in which the materials available in Mayna and Kawapanan, mainly prayers collected by Hervás (1787) and Teza (1868), will be analysed and compared for the first time in light of recent analyses in the new-born field called Kawapanan linguistics (Barraza de García 2005a,b; Valenzuela-Bismarck 2011a,b; Valenzuela 2013; Rojas-Berscia 2013, 2014; Madalengoitia-Barúa 2013; Farfán-Reto 2012), in order to test its affiliation to the Kawapanan language family, as claimed by Beuchat & Rivet (1909), and account for its place in the dialectology of this language family.
  • Rojas-Berscia, L. M., & Ghavami Dicker, S. (2015). Teonimia en el Alto Amazonas, el caso de Kanpunama. Escritura y Pensamiento, 18(36), 117-146.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2015). Verbal and nonverbal predictors of language-mediated anticipatory eye movements. Attention, Perception & Psychophysics, 77(3), 720-730. doi:10.3758/s13414-015-0873-x.

    Abstract

    During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), or an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements to targets and shape competitors were observed. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movement. The findings are consistent with multiple-mechanism accounts of predictive language processing.
  • Romøren, A. S. H., & Chen, A. (2015). Quiet is the new loud: Pausing and focus in child and adult Dutch. Language and Speech, 58, 8-23. doi:10.1177/0023830914563589.

    Abstract

    In a number of languages, prosody is used to highlight new information (or focus). In Dutch, focus is marked by accentuation, whereby focal constituents are accented and post-focal constituents are de-accented. Even if pausing is not traditionally seen as a cue to focus in Dutch, several previous studies have pointed to a possible relationship between pausing and information structure. Considering that Dutch-speaking 4- to 5-year-olds are not yet completely proficient in using accentuation for focus and that children generally pause more than adults, we asked whether pausing might be an available parameter for children to manipulate for focus. Sentences with varying focus structure were elicited from 10 Dutch-speaking 4- to 5-year-olds and 9 Dutch-speaking adults by means of a picture-matching game. Comparing pause durations before focal and non-focal targets showed pre-target pauses to be significantly longer when the targets were focal than when they were not. Notably, the use of pausing was more robust in the children than in the adults, suggesting that children exploit pausing to mark focus more generally than adults do, at a stage where their mastery of the canonical cues to focus is still developing.

  • Rossi, G. (2009). Il discorso scritto interattivo degli SMS: Uno studio pragmatico del "messaggiare". Rivista Italiana di Dialettologia, 33, 143-193. doi:10.1400/148734.
  • Rossi, G. (2015). Other-initiated repair in Italian. Open Linguistics, 1(1), 256-282. doi:10.1515/opli-2015-0002.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video recorded conversation in the Italian language (Romance). The article reports findings specific to the Italian language from the comparative project that is the topic of this special issue. While giving an overview of all the major practices for other-initiation of repair found in this language, special attention is given to (i) the functional distinctions between different open strategies (interjection, question words, formulaic), and (ii) the role of intonation in discriminating alternative restricted strategies, with a focus on different contour types used to produce repetitions.
  • Rossi, G. (2015). Responding to pre-requests: The organization of hai x ‘do you have x’ sequences in Italian. Journal of Pragmatics, 82, 5-22. doi:10.1016/j.pragma.2015.03.008.

    Abstract

    Among the strategies used by people to request others to do things, there is a particular family defined as pre-requests. The typical function of a pre-request is to check whether some precondition obtains for a request to be successfully made. A form like the Italian interrogative hai x ‘do you have x’, for example, is used to ask if an object is available — a requirement for the object to be transferred or manipulated. But what does it mean exactly to make a pre-request? What difference does it make compared to issuing a request proper? In this article, I address these questions by examining the use of hai x ‘do you have x’ interrogatives in a corpus of informal Italian interaction. Drawing on methods from conversation analysis and linguistics, I show that the status of hai x as a pre-request is reflected in particular properties in the domains of preference and sequence organisation, specifically in the design of blocking responses to the pre-request, and in the use of go-ahead responses, which lead to the expansion of the request sequence. This study contributes to current research on requesting as well as on sequence organisation by demonstrating the response affordances of pre-requests and by furthering our understanding of the processes of sequence expansion.
  • Rowbotham, S., Lloyd, D. M., Holler, J., & Wearden, A. (2015). Externalizing the private experience of pain: A role for co-speech gestures in pain communication? Health Communication, 30(1), 70-80. doi:10.1080/10410236.2013.836070.

    Abstract

    Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced “co-speech hand gestures” may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication.
  • Rowland, C. F., & Theakston, A. L. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 2: The modals and auxiliary DO. Journal of Speech, Language, and Hearing Research, 52, 1471-1492. doi:10.1044/1092-4388(2009/08-0037a).

    Abstract

    Purpose: The study of auxiliary acquisition is central to work on language development and has attracted theoretical work from both nativist and constructivist approaches. This study is part of a 2-part companion set that represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. The aim of the research described in this part is to track the development of modal auxiliaries and auxiliary DO in questions and declaratives to provide a more complete picture of the development of the auxiliary system in English-speaking children. Method: Twelve English-speaking children participated in 2 tasks designed to elicit auxiliaries CAN, WILL, and DOES in declaratives and yes/no questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of the target auxiliaries differed in complex ways according to auxiliary, polarity, and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusions: These data cannot be explained in full by existing theories of auxiliary acquisition. Researchers working within both generativist and constructivist frameworks need to develop more detailed theories of acquisition that predict the pattern of acquisition observed.
  • Rowland, C. F., & Peter, M. (2015). Up to speed? Nursery World Magazine, 15-28 June 2015, 18-20.
  • Rubio-Fernández, P., Wearing, C., & Carston, R. (2015). Metaphor and hyperbole: Testing the continuity hypothesis. Metaphor and Symbol, 30(1), 24-40. doi:10.1080/10926488.2015.980699.

    Abstract

    In standard Relevance Theory, hyperbole and metaphor are categorized together as loose uses of language, on a continuum with approximations, category extensions and other cases of loosening/broadening of meaning. Specifically, it is claimed that there are no interesting differences (in either interpretation or processing) between hyperbolic and metaphorical uses (Sperber & Wilson, 2008). In recent work, we have set out to provide a more fine-grained articulation of the similarities and differences between hyperbolic and metaphorical uses and their relation to literal uses (Carston & Wearing, 2011, 2014). We have defended the view that hyperbolic use involves a shift of magnitude along a dimension which is intrinsic to the encoded meaning of the hyperbole vehicle, while metaphor involves a multi-dimensional qualitative shift away from the encoded meaning of the metaphor vehicle. In this article, we present three experiments designed to test the predictions of this analysis, using a variety of tasks (paraphrase elicitation, self-paced reading and sentence verification). The results of the study support the view that hyperbolic and metaphorical interpretations, despite their commonalities as loose uses of language, are significantly different.
  • De Ruiter, L. E. (2015). Information status marking in spontaneous vs. read speech in story-telling tasks – Evidence from intonation analysis using GToBI. Journal of Phonetics, 48, 29-44. doi:10.1016/j.wocn.2014.10.008.

    Abstract

    Two studies investigated whether speaking mode influences the way German speakers mark the information status of discourse referents in nuclear position. In Study 1, speakers produced narrations spontaneously on the basis of picture stories in which the information status of referents (new, accessible and given) was systematically varied. In Study 2, speakers saw the same pictures, but this time accompanied by text to be read out. Clear differences were found depending on speaking mode: In spontaneous speech, speakers always accented new referents. They did not use different pitch accent types to differentiate between new and accessible referents, nor did they always deaccent given referents. In addition, speakers often made use of low pitch accents in combination with high boundary tones to indicate continuity. In contrast to this, read speech was characterized by low boundary tones, consistent deaccentuation of given referents and the use of H+L* and H+!H* accents, for both new and accessible referents. The results are discussed in terms of the function of intonational features in communication. It is argued that reading intonation is not comparable to intonation in spontaneous speech, and that this has important consequences also for our choice of methodology in child language acquisition research.
  • De Ruiter, L. E. (2009). The prosodic marking of topical referents in the German "Vorfeld" by children and adults. The Linguistic Review, 26, 329-354. doi:10.1515/tlir.2009.012.

    Abstract

    This article reports on the analysis of prosodic marking of topical referents in the German prefield by 5- and 7-year-old children and adults. Natural speech data was obtained from a picture-elicited narration task. The data was analyzed both phonologically and phonetically. In line with previous findings, adult speakers realized topical referents predominantly with the accents L+H* and L*+H, but H* accents and unaccented items were also observed. Children used the same accent types as adults, but the accent types were distributed differently. Also, children aligned pitch minima earlier than adults and produced accents with a decreased speed of pitch change. Possible reasons for these findings are discussed. Contrast – defined in terms of a change of subjecthood – did not affect the choice of pitch accent type and did not influence phonetic realization, underlining the fact that accentuation is often a matter of individual speaker choice.

  • Samur, D., Lai, V. T., Hagoort, P., & Willems, R. M. (2015). Emotional context modulates embodied metaphor comprehension. Neuropsychologia, 78, 108-114. doi:10.1016/j.neuropsychologia.2015.10.003.

    Abstract

    Emotions are often expressed metaphorically, and both emotion and metaphor are ways through which abstract meaning can be grounded in language. Here we investigate specifically whether motion-related verbs when used metaphorically are differentially sensitive to a preceding emotional context, as compared to when they are used in a literal manner. Participants read stories that ended with ambiguous action/motion sentences (e.g., he got it), in which the action/motion could be interpreted metaphorically (he understood the idea) or literally (he caught the ball) depending on the preceding story. Orthogonal to the metaphorical manipulation, the stories were high or low in emotional content. The results showed that emotional context modulated the neural response in visual motion areas to the metaphorical interpretation of the sentences, but not to their literal interpretations. In addition, literal interpretations of the target sentences led to stronger activation in the visual motion areas as compared to metaphorical readings of the sentences. We interpret our results as suggesting that emotional context specifically modulates mental simulation during metaphor processing.
  • San Roque, L., & Bergqvist, H. (Eds.). (2015). Epistemic marking in typological perspective [Special Issue]. STUF - Language typology and universals, 68(2).
  • San Roque, L. (2015). Using you to get to me: Addressee perspective and speaker stance in Duna evidential marking. STUF: Language typology and universals, 68(2), 187-210. doi:10.1515/stuf-2015-0010.

    Abstract

    Languages have complex and varied means for representing points of view, including constructions that can express multiple perspectives on the same event. This paper presents data on two evidential constructions in the language Duna (Papua New Guinea) that imply features of both speaker and addressee knowledge simultaneously. I discuss how talking about an addressee’s knowledge can occur in contexts of both coercion and co-operation, and, while apparently empathetic, can provide a covert way to both manipulate the addressee’s attention and express speaker stance. I speculate that ultimately, however, these multiple perspective constructions may play a pro-social role in building or repairing the interlocutors’ common ground.
  • San Roque, L., Kendrick, K. H., Norcliffe, E., Brown, P., Defina, R., Dingemanse, M., Dirksmeyer, T., Enfield, N. J., Floyd, S., Hammond, J., Rossi, G., Tufvesson, S., Van Putten, S., & Majid, A. (2015). Vision verbs dominate in conversation across cultures, but the ranking of non-visual verbs varies. Cognitive Linguistics, 26, 31-60. doi:10.1515/cog-2014-0089.

    Abstract

    To what extent does perceptual language reflect universals of experience and cognition, and to what extent is it shaped by particular cultural preoccupations? This paper investigates the universality~relativity of perceptual language by examining the use of basic perception terms in spontaneous conversation across 13 diverse languages and cultures. We analyze the frequency of perception words to test two universalist hypotheses: that sight is always a dominant sense, and that the relative ranking of the senses will be the same across different cultures. We find that references to sight outstrip references to the other senses, suggesting a pan-human preoccupation with visual phenomena. However, the relative frequency of the other senses was found to vary cross-linguistically. Cultural relativity was conspicuous as exemplified by the high ranking of smell in Semai, an Aslian language. Together these results suggest a place for both universal constraints and cultural shaping of the language of perception.
  • Schaefer, M., Haun, D. B., & Tomasello, M. (2015). Fair is not fair everywhere. Psychological Science, 26(8), 1252-1260. doi:10.1177/0956797615586188.

    Abstract

    Distributing the spoils of a joint enterprise on the basis of work contribution or relative productivity seems natural to the modern Western mind. But such notions of merit-based distributive justice may be culturally constructed norms that vary with the social and economic structure of a group. In the present research, we showed that children from three different cultures have very different ideas about distributive justice. Whereas children from a modern Western society distributed the spoils of a joint enterprise precisely in proportion to productivity, children from a gerontocratic pastoralist society in Africa did not take merit into account at all. Children from a partially hunter-gatherer, egalitarian African culture distributed the spoils more equally than did children from the other two cultures, with merit playing only a limited role. This pattern of results suggests that some basic notions of distributive justice are not universal intuitions of the human species but rather culturally constructed behavioral norms.
  • Scharenborg, O., Weber, A., & Janse, E. (2015). Age and hearing loss and the use of acoustic cues in fricative categorization. The Journal of the Acoustical Society of America, 138(3), 1408-1417. doi:10.1121/1.4927728.

    Abstract

    This study examined the use of fricative noise information and coarticulatory cues in the categorization of word-final fricatives [s] and [f] by younger and older Dutch listeners. In particular, the effect of information loss in the higher frequencies on the use of these two cues for fricative categorization was investigated. If information in the higher frequencies is less strongly available, fricative identification may be impaired, or listeners may learn to focus more on coarticulatory information. The present study investigated this second possibility. Phonetic categorization results showed that both younger and older Dutch listeners use the primary cue, fricative noise, and the secondary cue, coarticulatory information, to distinguish word-final [f] from [s]. Individual hearing sensitivity in the older listeners modified the use of fricative noise information, but did not modify the use of coarticulatory information. When high-frequency information was filtered out of the speech signal, fricative noise could no longer be used by the younger and older adults. Crucially, they also did not learn to rely more on coarticulatory information as a compensatory cue for fricative categorization. This suggests that listeners do not readily show compensatory use of this secondary cue to fricative identity when fricative categorization becomes difficult.
  • Scharenborg, O., Weber, A., & Janse, E. (2015). The role of attentional abilities in lexically guided perceptual learning by older listeners. Attention, Perception & Psychophysics, 77(2), 493-507. doi:10.3758/s13414-014-0792-2.

    Abstract

    This study investigates two variables that may modify lexically-guided perceptual learning: individual hearing sensitivity and attentional abilities. Older Dutch listeners (aged 60+, varying from good hearing to mild-to-moderate high-frequency hearing loss) were tested on a lexically-guided perceptual learning task using the contrast [f]-[s]. This contrast mainly differentiates between the two consonants in the higher frequencies, and thus is supposedly challenging for listeners with hearing loss. The analyses showed that older listeners generally engage in lexically-guided perceptual learning. Hearing loss and selective attention did not modify perceptual learning in our participant sample, while attention-switching control did: listeners with poorer attention-switching control showed a stronger perceptual learning effect. We postulate that listeners with better attention-switching control may, in general, rely more strongly on bottom-up acoustic information compared to listeners with poorer attention-switching control, making them in turn less susceptible to lexically-guided perceptual learning effects. Our results, moreover, clearly show that lexically-guided perceptual learning is not lost when acoustic processing is less accurate.
  • Scheeringa, R., Petersson, K. M., Oostenveld, R., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2009). Trial-by-trial coupling between EEG and BOLD identifies networks related to alpha and theta EEG power increases during working memory maintenance. Neuroimage, 44, 1224-1238. doi:10.1016/j.neuroimage.2008.08.041.

    Abstract

    PET and fMRI experiments have previously shown that several brain regions in the frontal and parietal lobes are involved in working memory maintenance. MEG and EEG experiments have shown parametric increases with load for oscillatory activity in posterior alpha and frontal theta power. In the current study we investigated whether the areas found with fMRI can be associated with these alpha and theta effects by measuring simultaneous EEG and fMRI during a modified Sternberg task. This allowed us to correlate EEG at the single-trial level with the fMRI BOLD signal by forming a regressor based on single-trial alpha and theta power estimates. We observed a right posterior, parametric alpha power increase, which was functionally related to decreases in BOLD in the primary visual cortex and in the posterior part of the right middle temporal gyrus. We relate this finding to the inhibition of neuronal activity that may interfere with WM maintenance. An observed parametric increase in frontal theta power was correlated with a decrease in BOLD in regions that together form the default mode network. We did not observe correlations between oscillatory EEG phenomena and BOLD in the traditional WM areas. In conclusion, the study shows that simultaneous EEG-fMRI recordings can be successfully used to identify the emergence of functional networks in the brain during the execution of a cognitive task.
  • Schiller, N., Horemans, I., Ganushchak, L. Y., & Koester, D. (2009). Event-related brain potentials during monitoring of speech errors. NeuroImage, 44, 520-530. doi:10.1016/j.neuroimage.2008.09.019.

    Abstract

    When we perceive speech, our goal is to extract the meaning of the verbal message, which includes semantic processing. However, how deeply do we process speech in different situations? In two experiments, native Dutch participants heard spoken sentences describing simultaneously presented pictures. Sentences either correctly described the pictures or contained an anomalous final word (i.e. a semantically or phonologically incongruent word). In the first experiment, spoken sentences were task-irrelevant and both anomalous conditions elicited similar centro-parietal N400s that were larger in amplitude than the N400 for the correct condition. In the second experiment, we ensured that participants processed the same stimuli semantically. In an early time window, we found similar phonological mismatch negativities for both anomalous conditions compared to the correct condition. These negativities were followed by an N400 that was larger for semantic than phonological errors. Together, these data suggest that we process speech semantically, even if the speech is task-irrelevant. Once listeners allocate more cognitive resources to the processing of speech, we suggest that they make predictions for upcoming words, presumably by means of the production system and an internal monitoring loop, to facilitate lexical processing of the perceived speech.
  • Schluessel, V., & Düngen, D. (2015). Irrespective of size, scales, color or body shape, all fish are just fish: object categorization in the gray bamboo shark Chiloscyllium griseum. Animal Cognition, 18, 497-507. doi:10.1007/s10071-014-0818-0.

    Abstract

    Object categorization is an important cognitive adaptation, quickly providing an animal with relevant and potentially life-saving information. It can be defined as the process whereby objects that are not the same are nonetheless grouped together according to some defining feature(s) and responded to as if they were the same. In this way, knowledge about one object, behavior or situation can be extrapolated onto another without much cost and effort. Many vertebrates including humans, monkeys, birds and teleosts have been shown to be able to categorize, with abilities varying between species and tasks. This study assessed object categorization skills in the gray bamboo shark Chiloscyllium griseum. Sharks learned to distinguish between the two categories 'fish' and 'snail' independently of image features and image type, i.e., black and white drawings, photographs, comics or negative images. Transfer tests indicated that sharks predominantly focused on and categorized the positive stimulus, while disregarding the negative stimulus.
  • Schoffelen, J.-M., & Gross, J. (2009). Source connectivity analysis with MEG and EEG. Human Brain Mapping, 30, 1857-1865. doi:10.1002/hbm.20745.

    Abstract

    Interactions between functionally specialized brain regions are crucial for normal brain function. Magnetoencephalography (MEG) and electroencephalography (EEG) are techniques suited to capture these interactions, because they provide whole-head measurements of brain activity in the millisecond range. More than one sensor picks up the activity of an underlying source. This field spread severely limits the utility of connectivity measures computed directly between sensor recordings. Consequently, neuronal interactions should be studied on the level of the reconstructed sources. This article reviews several methods that have been applied to investigate interactions between brain regions in source space. We will mainly focus on the different measures used to quantify connectivity, and on the different strategies adopted to identify regions of interest. Despite various successful accounts of MEG and EEG source connectivity, caution with respect to the interpretation of the results is still warranted. This is due to the fact that effects of field spread can never be completely abolished in source space. However, in this very exciting and developing field of research this cautionary note should not discourage researchers from further investigation into the connectivity between neuronal sources.
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2015). Do we perceive others better than ourselves? A perceptual benefit for noise-vocoded speech produced by an average speaker. PLoS One, 10(7): e0129731. doi:10.1371/journal.pone.0129731.

    Abstract

    In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners’ representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil I. Linguistische Berichte, 141, 371-400.
  • Schumacher, M., & Skiba, R. (1992). Prädikative und modale Ausdrucksmittel in den Lernervarietäten einer polnischen Migrantin: Eine Longitudinalstudie. Teil II. Linguistische Berichte, 142, 451-475.
  • Schuppler, B., van Doremalen, J., Scharenborg, O., Cranen, B., & Boves, L. (2009). Using temporal information for improving articulatory-acoustic feature classification. Automatic Speech Recognition and Understanding, IEEE 2009 Workshop, 70-75. doi:10.1109/ASRU.2009.5373314.

    Abstract

    This paper combines acoustic features with a high temporal and a high frequency resolution to reliably classify articulatory events of short duration, such as bursts in plosives. SVM classification experiments on TIMIT and SVArticulatory showed that articulatory-acoustic features (AFs) based on a combination of MFCCs derived from a long window of 25 ms and a short window of 5 ms that are both shifted in 2.5 ms steps (Both) outperform standard MFCCs derived with a window of 25 ms and a shift of 10 ms (Baseline). Finally, comparison of the TIMIT and SVArticulatory results showed that for classifiers trained on data that allows for asynchronously changing AFs (SVArticulatory) the improvement from Baseline to Both is larger than for classifiers trained on data where AFs change simultaneously with the phone boundaries (TIMIT).
  • Scott, S. K., McGettigan, C., & Eisner, F. (2009). A little more conversation, a little less action: Candidate roles for motor cortex in speech perception. Nature Reviews Neuroscience, 10(4), 295-302. doi:10.1038/nrn2603.

    Abstract

    The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor cortex activation is essential in joint speech, particularly for the timing of turn-taking.
  • Scott, L. J., Muglia, P., Kong, X. Q., Guan, W., Flickinger, M., Upmanyu, R., Tozzi, F., Li, J. Z., Burmeister, M., Absher, D., Thompson, R. C., Francks, C., Meng, F., Antoniades, A., Southwick, A. M., Schatzberg, A. F., Bunney, W. E., Barchas, J. D., Jones, E. G., Day, R., Matthews, K., McGuffin, P., Strauss, J. S., Kennedy, J. L., Middleton, L., Roses, A. D., Watson, S. J., Vincent, J. B., Myers, R. M., Farmer, A. E., Akil, H., Burns, D. K., & Boehnke, M. (2009). Genome-wide association and meta-analysis of bipolar disorder in individuals of European ancestry. Proceedings of the National Academy of Sciences of the United States of America, 106(18), 7501-7506. doi:10.1073/pnas.0813386106.

    Abstract

    Bipolar disorder (BP) is a disabling and often life-threatening disorder that affects approximately 1% of the population worldwide. To identify genetic variants that increase the risk of BP, we genotyped on the Illumina HumanHap550 Beadchip 2,076 bipolar cases and 1,676 controls of European ancestry from the National Institute of Mental Health Human Genetics Initiative Repository, and the Prechter Repository and samples collected in London, Toronto, and Dundee. We imputed SNP genotypes and tested for SNP-BP association in each sample and then performed meta-analysis across samples. The strongest association P value for this 2-study meta-analysis was 2.4 x 10(-6). We next imputed SNP genotypes and tested for SNP-BP association based on the publicly available Affymetrix 500K genotype data from the Wellcome Trust Case Control Consortium for 1,868 BP cases and a reference set of 12,831 individuals. A 3-study meta-analysis of 3,683 nonoverlapping cases and 14,507 extended controls on >2.3 M genotyped and imputed SNPs resulted in 3 chromosomal regions with association P approximately 10(-7): 1p31.1 (no known genes), 3p21 (>25 known genes), and 5q15 (MCTP1). The most strongly associated nonsynonymous SNP rs1042779 (OR = 1.19, P = 1.8 x 10(-7)) is in the ITIH1 gene on chromosome 3, with other strongly associated nonsynonymous SNPs in GNL3, NEK4, and ITIH3. Thus, these chromosomal regions harbor genes implicated in cell cycle, neurogenesis, neuroplasticity, and neurosignaling. In addition, we replicated the reported ANK3 association results for SNP rs10994336 in the nonoverlapping GSK sample (OR = 1.37, P = 0.042). Although these results are promising, analysis of additional samples will be required to confirm that variant(s) in these regions influence BP risk.

  • Segaert, K., Nygård, G. E., & Wagemans, J. (2009). Identification of everyday objects on the basis of kinetic contours. Vision Research, 49(4), 417-428. doi:10.1016/j.visres.2008.11.012.

    Abstract

    Using kinetic contours derived from everyday objects, we investigated how motion affects object identification. In order not to be distinguishable when static, kinetic contours were made from random dot displays consisting of two regions, inside and outside the object contour. In Experiment 1, the dots were moving in only one of two regions. The objects were identified nearly equally well as soon as the dots either in the figure or in the background started to move. RTs decreased with increasing motion coherence levels and were shorter for complex, less compact objects than for simple, more compact objects. In Experiment 2, objects could be identified when the dots were moving both in the figure and in the background with speed and direction differences between the two. A linear increase in either the speed difference or the direction difference caused a linear decrease in RT for correct identification. In addition, the combination of speed and motion differences appeared to be super-additive.
  • Seidl, A., Cristia, A., Bernard, A., & Onishi, K. H. (2009). Allophonic and phonemic contrasts in infants' learning of sound patterns. Language Learning and Development, 5, 191-202. doi:10.1080/15475440902754326.

    Abstract

    French-learning 11-month-old and English-learning 11- and 4-month-old infants were familiarized with consonant–vowel–consonant syllables in which the final consonants were dependent on whether the preceding vowel was oral or nasal. Oral and nasal vowels are present in the ambient language of all participants, but vowel nasality is phonemic (contrastive) in French and allophonic (noncontrastive) in English. After familiarization, infants heard novel syllables that either followed or violated the familiarized patterns. French-learning 11-month-olds and English-learning 4-month-olds displayed a reliable pattern of preference demonstrating learning and generalization of the patterns, while English-learning 11-month-olds oriented equally to syllables following and violating the familiarized patterns. The results are consistent with an experience-driven reduction of attention to allophonic contrasts by as early as 11 months, which influences phonotactic learning.
  • Sekine, K. (2009). Changes in frame of reference use across the preschool years: A longitudinal study of the gestures and speech produced during route descriptions. Language and Cognitive Processes, 24(2), 218-238. doi:10.1080/01690960801941327.

    Abstract

    This study longitudinally investigated developmental changes in the frame of reference used by children in their gestures and speech. Fifteen children, between 4 and 6 years of age, were asked once a year to describe their route home from their nursery school. When the children were 4 years old, they tended to produce gestures that directly and continuously indicated their actual route in a large gesture space. In contrast, as 6-year-olds, their gestures were segmented and did not match the actual route. Instead, at age 6, the children seemed to create a virtual space in front of themselves to symbolically describe their route. These results indicate that the use of frames of reference develops across the preschool years, shifting from an actual environmental to an abstract environmental frame of reference. Factors underlying the development of frame of reference, including verbal encoding skills and experience, are discussed.
  • Sekine, K., Stam, G., Yoshioka, K., Tellier, M., & Capirci, O. (2015). Cross-linguistic views of gesture usage. Vigo International Journal of Applied Linguistics (VIAL), (12), 91-105.

    Abstract

    People have stereotypes about gesture usage. For instance, speakers in East Asia are not supposed to gesticulate, and it is believed that Italians gesticulate more than the British. Despite the prevalence of such views, studies that investigate these stereotypes are scarce. The present study examined people's views on spontaneous gestures by collecting data from five different countries. A total of 363 undergraduate students from five countries (France, Italy, Japan, the Netherlands and the USA) participated in this study. Data were collected through a two-part questionnaire. Part 1 asked participants to rate two characteristics of gesture: frequency and size of gesture for 13 different languages. Part 2 asked them about their views on factors that might affect the production of gestures. The results showed that most participants in this study believe that Italian, Spanish, and American English speakers produce larger gestures more frequently than other language speakers. They also showed that each culture group, even within Europe, put weight on a slightly different aspect of gestures.
  • Sekine, K., & Kita, S. (2015). Development of multimodal discourse comprehension: Cohesive use of space by gestures. Language, Cognition and Neuroscience, 30(10), 1245-1258. doi:10.1080/23273798.2015.1053814.

    Abstract

    This study examined how well 5-, 6-, 10-year-olds and adults integrated information from spoken discourse with cohesive use of space in gesture, in comprehension. In Experiment 1, participants were presented with a combination of spoken discourse and a sequence of cohesive gestures, which consistently located each of the two protagonists in two distinct locations in gesture space. Participants were asked to select an interpretation of the final sentence that best matched the preceding spoken and gestural contexts. Adults and 10-year-olds performed better than 5-year-olds, who were at chance level. In Experiment 2, another group of 5-year-olds was presented with the same stimuli as in Experiment 1, except that the actor showed hand-held pictures, instead of producing cohesive gestures. Unlike cohesive gestures, one set of pictures was self-explanatory and did not require integration with the concurrent speech to derive the referent. With these pictures, 5-year-olds performed nearly perfectly, and their performance with the identifiable pictures was significantly better than that with the unidentifiable pictures. These results suggest that young children failed to integrate spoken discourse and cohesive use of space in gestures, because they cannot derive a referent of cohesive gestures from the local speech context.
  • Sekine, K., Snowden, H., & Kita, S. (2015). The development of the ability to semantically integrate information in speech and iconic gesture in comprehension. Cognitive Science. doi:10.1111/cogs.12221.

    Abstract

    We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with either an iconic gesture or a spoken sentence or a combination of the two on a computer screen, and they were instructed to select a photograph that best matched the message. The 3-year-olds did not integrate information in speech and gesture, but 5-year-olds and adults did. In Experiment 2, 3-year-old children were presented with the same speech and gesture as in Experiment 1, produced live by an experimenter. When presented live, 3-year-olds could integrate speech and gesture. We concluded that development of the integration ability is part of the broader developmental shift; however, live presentation facilitates the nascent integration ability in 3-year-olds.
  • Sekine, K., & Kita, S. (2015). The parallel development of the form and meaning of two-handed gestures and linguistic information packaging within a clause in narrative. Open Linguistics, 1(1), 490-502. doi:10.1515/opli-2015-0015.

    Abstract

    We examined how two-handed gestures and speech with equivalent contents that are used in narrative develop during childhood. The participants were 40 native speakers of English consisting of four different age groups: 3-, 5-, 9-year-olds, and adults. A set of 10 video clips depicting motion events was used to elicit speech and gesture. There are two findings. First, two types of two-handed gestures showed different developmental changes: those with a single-handed stroke and a simultaneous hold increased with age, while those with a two-handed stroke decreased with age. Second, representational gesture and speech developed in parallel at the discourse level. More specifically, the ways in which information is packaged in a gesture and in a clause are similar for a given age group; that is, gesture and speech develop hand-in-hand.
  • Senft, G. (1992). Bakavilisi Biga - or: What happens to English words in the Kilivila Language? Language and Linguistics in Melanesia, 23, 13-49.
  • Senft, G. (1995). Crime and custom auf den Trobriand-Inseln: Der Fall Tokurasi. Anthropos, 90, 17-25.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (2009). [Review of the book Geschichten und Gesänge von der Insel Nias in Indonesien ed. by Johannes Maria Hämmerle]. Rundbrief - Forum für Mitglieder des Pazifik-Netzwerkes e.V., 78/09, 29-31.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (1992). [Review of the book The Yimas language of New Guinea by William A. Foley]. Linguistics, 30, 634-639.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.