Publications

  • Cablitz, G. (2002). The acquisition of an absolute system: learning to talk about space in Marquesan (Oceanic, French Polynesia). In E. V. Clark (Ed.), Space in language: Location, motion, path, and manner (pp. 40-49). Stanford: Center for the Study of Language & Information (Electronic proceedings).
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., Moseley, R., & Pulvermüller, F. (2012). Body-part-specific Representations of Semantic Noun Categories. Journal of Cognitive Neuroscience, 24(6), 1492-1509. doi:10.1162/jocn_a_00219.

    Abstract

    Word meaning processing in the brain involves ventrolateral temporal cortex, but a semantic contribution of the dorsal stream, especially frontocentral sensorimotor areas, has been controversial. We here examine brain activation during passive reading of object-related nouns from different semantic categories, notably animal, food, and tool words, matched for a range of psycholinguistic features. Results show ventral stream activation in temporal cortex along with category-specific activation patterns in both ventral and dorsal streams, including sensorimotor systems and adjacent pFC. Precentral activation reflected action-related semantic features of the word categories. Cortical regions implicated in mouth and face movements were sparked by food words, and hand area activation was seen for tool words, consistent with the actions implicated by the objects the words are used to speak about. Furthermore, tool words specifically activated the right cerebellum, and food words activated the left orbito-frontal and fusiform areas. We discuss our results in the context of category-specific semantic deficits in the processing of words and concepts, along with previous neuroimaging research, and conclude that specific dorsal and ventral areas in frontocentral and temporal cortex index visual and affective–emotional semantic attributes of object-related nouns and action-related affordances of their referent objects.
  • Carroll, M., Lambert, M., Weimar, K., Flecken, M., & von Stutterheim, C. (2012). Tracing trajectories: Motion event construal by advanced L2 French-English and L2 French-German speakers. Language Interaction and Acquisition, 3(2), 202-230. doi:10.1075/lia.3.2.03car.

    Abstract

    Although the typological contrast between Romance and Germanic languages as verb-framed versus satellite-framed (Talmy 1985) forms the background for many empirical studies on L2 acquisition, the inconclusive picture to date calls for more differentiated, fine-grained analyses. The present study goes beyond explanations based on this typological contrast and takes into account the sources from which spatial concepts are mainly derived in order to shape the trajectory traced by the entity in motion when moving through space: the entity in V-languages versus features of the ground in S-languages. It investigates why advanced French learners of English and German have difficulty acquiring the use of spatial concepts typical of the L2s to shape the trajectory, although relevant concepts can be expressed in their L1. The analysis compares motion event descriptions, based on the same sets of video clips, of L1 speakers of the three languages to L1 French-L2 English and L1 French-L2 German speakers, showing that the learners do not fully acquire the use of L2-specific spatial concepts. We argue that encoded concepts derived from the entity in motion vs. the ground lead to a focus on different aspects of motion events, in accordance with their compatibility with these sources, and are difficult to restructure in L2 acquisition.
  • Casasanto, D., & Henetz, T. (2012). Handedness shapes children’s abstract concepts. Cognitive Science, 36, 359-372. doi:10.1111/j.1551-6709.2011.01199.x.

    Abstract

    Can children’s handedness influence how they represent abstract concepts like kindness and intelligence? Here we show that from an early age, right-handers associate rightward space more strongly with positive ideas and leftward space with negative ideas, but the opposite is true for left-handers. In one experiment, children indicated where on a diagram a preferred toy and a dispreferred toy should go. Right-handers tended to assign the preferred toy to a box on the right and the dispreferred toy to a box on the left. Left-handers showed the opposite pattern. In a second experiment, children judged which of two cartoon animals looked smarter (or dumber) or nicer (or meaner). Right-handers attributed more positive qualities to animals on the right, but left-handers to animals on the left. These contrasting associations between space and valence cannot be explained by exposure to language or cultural conventions, which consistently link right with good. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they can act more fluently with their dominant hands. Results support the body-specificity hypothesis (Casasanto, 2009), showing that children with different kinds of bodies think differently in corresponding ways.
  • Casillas, M., & Frank, M. C. (2012). Cues to turn boundary prediction in adults and preschoolers. In S. Brown-Schmidt, J. Ginzburg, & S. Larsson (Eds.), Proceedings of SemDial 2012 (SeineDial): The 16th Workshop on the Semantics and Pragmatics of Dialogue (pp. 61-69). Paris: Université Paris-Diderot.

    Abstract

    Conversational turns often proceed with very brief pauses between speakers. In order to maintain “no gap, no overlap” turn-taking, we must be able to anticipate when an ongoing utterance will end, tracking the current speaker for upcoming points of potential floor exchange. The precise set of cues that listeners use for turn-end boundary anticipation is not yet established. We used an eye-tracking paradigm to measure adults’ and children’s online turn processing as they watched videos of conversations in their native language (English) and a range of other languages they did not speak. Both adults and children anticipated speaker transitions effectively. In addition, we observed evidence of turn-boundary anticipation for questions even in languages that were unknown to participants, suggesting that listeners’ success in turn-end anticipation does not rely solely on lexical information.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Catani, M., Dell'Acqua, F., Bizzi, A., Forkel, S. J., Williams, S. C., Simmons, A., Murphy, D. G., & Thiebaut de Schotten, M. (2012). Beyond cortical localization in clinico-anatomical correlation. Cortex, 48(10), 1262-1287. doi:10.1016/j.cortex.2012.07.001.

    Abstract

    Last year was the 150th anniversary of Paul Broca's landmark case report on speech disorder that paved the way for subsequent studies of cortical localization of higher cognitive functions. However, many complex functions rely on the activity of distributed networks rather than single cortical areas. Hence, it is important to understand how brain regions are linked within large-scale networks and to map lesions onto connecting white matter tracts. To facilitate this network approach we provide a synopsis of classical neurological syndromes associated with frontal, parietal, occipital, temporal and limbic lesions. A review of tractography studies in a variety of neuropsychiatric disorders is also included. The synopsis is accompanied by a new atlas of the human white matter connections based on diffusion tensor tractography freely downloadable on http://www.natbrainlab.com. Clinicians can use the maps to accurately identify the tract affected by lesions visible on conventional CT or MRI. The atlas will also assist researchers to interpret their group analysis results. We hope that the synopsis and the atlas by allowing a precise localization of white matter lesions and associated symptoms will facilitate future work on the functional correlates of human neural networks as derived from the study of clinical populations. Our goal is to stimulate clinicians to develop a critical approach to clinico-anatomical correlative studies and broaden their view of clinical anatomy beyond the cortical surface in order to encompass the dysfunction related to connecting pathways.

    Additional information

    supplementary file
  • Chang, F., Janciauskas, M., & Fitz, H. (2012). Language adaptation and learning: Getting explicit about implicit learning. Language and Linguistics Compass, 6, 259-278. doi:10.1002/lnc3.337.

    Abstract

    Linguistic adaptation is a phenomenon where language representations change in response to linguistic input. Adaptation can occur on multiple linguistic levels such as phonology (tuning of phonotactic constraints), words (repetition priming), and syntax (structural priming). The persistent nature of these adaptations suggests that they may be a form of implicit learning and connectionist models have been developed which instantiate this hypothesis. Research on implicit learning, however, has also produced evidence that explicit chunk knowledge is involved in the performance of these tasks. In this review, we examine how these interacting implicit and explicit processes may change our understanding of language learning and processing.
  • Chen, X. S., & Brown, C. M. (2012). Computational identification of new structured cis-regulatory elements in the 3'-untranslated region of human protein coding genes. Nucleic Acids Research, 40, 8862-8873. doi:10.1093/nar/gks684.

    Abstract

    Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. An approach to this is to use computational methods that have the advantage of analysing genomic data, particularly comparative data on a large scale. In this study, a set of structural discovery algorithms was applied followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3′-untranslated regions (UTRs) and successfully distinguished these, and groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3′-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2002). Language-specific uses of the effort code. In B. Bel, & I. Marlien (Eds.), Proceedings of the 1st Conference on Speech Prosody (pp. 215-218). Aix-en-Provence: Université de Provence.

    Abstract

    Two groups of listeners with Dutch and British English language backgrounds judged Dutch and British English utterances, respectively, which varied in the intonation contour on the scales EMPHATIC vs. NOT EMPHATIC and SURPRISED vs. NOT SURPRISED, two meanings derived from the Effort Code. The stimuli, which differed in sentence mode but were otherwise lexically equivalent, were varied in peak height, peak alignment, end pitch, and overall register. In both languages, there are positive correlations between peak height and degree of emphasis, between peak height and degree of surprise, between peak alignment and degree of surprise, and between pitch register and degree of surprise. However, in all these cases, Dutch stimuli lead to larger perceived meaning differences than the British English stimuli. This difference in the extent to which increased pitch height triggers increases in perceived emphasis and surprise is argued to be due to the difference in the standard pitch ranges between Dutch and British English. In addition, we found a positive correlation between pitch register and the degree of emphasis in Dutch, but a negative correlation in British English. This is an unexpected difference, which illustrates a case of ambiguity in the meaning of pitch.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., Jun, S.-A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193-228. doi:10.1006/jpho.2001.0153.

    Abstract

    This study examines acoustic and aerodynamic characteristics of consonants in standard Korean and in Cheju, an endangered Korean language. The focus is on the well-known three-way distinction among voiceless stops (i.e., lenis, fortis, aspirated) and the two-way distinction between the voiceless fricatives /s/ and /s*/. While such a typologically unusual contrast among voiceless stops has long drawn the attention of phoneticians and phonologists, there is no single work in the literature that discusses a body of data representing a relatively large number of speakers. This study reports a variety of acoustic and aerodynamic measures obtained from 12 Korean speakers (four speakers of Seoul Korean and eight speakers of Cheju). Results show that, in addition to findings similar to those reported by others, there are three crucial points worth noting. Firstly, lenis, fortis, and aspirated stops are systematically differentiated from each other by the voice quality of the following vowel. Secondly, these stops are also differentiated by aerodynamic mechanisms. The aspirated and fortis stops are similar in supralaryngeal articulation, but employ a different relation between intraoral pressure and flow. Thirdly, our study suggests that the fricative /s/ is better categorized as “lenis” rather than “aspirated”. The paper concludes with a discussion of the implications of Korean data for theories of the voicing contrast and their phonological representations.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chu, M., & Kita, S. (2012). The nature of the beneficial role of spontaneous gesture in spatial problem solving [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Oral Presentations, 13(Suppl. 1), S39.

    Abstract

    Spontaneous gestures play an important role in spatial problem solving. We investigated the functional role and underlying mechanism of spontaneous gestures in spatial problem solving. In Experiment 1, 132 participants were required to solve a mental rotation task (see Figure 1) without speaking. Participants gestured more frequently in difficult trials than in easy trials. In Experiment 2, 66 new participants were given two identical sets of mental rotation task problems, as the one used in Experiment 1. Participants who were encouraged to gesture in the first set of mental rotation task problems solved more problems correctly than those who were allowed to gesture or those who were prohibited from gesturing, both in the first set and in the second set in which all participants were prohibited from gesturing. The gestures produced by the gesture-encouraged group and the gesture-allowed group were not qualitatively different. In Experiment 3, 32 new participants were first given a set of mental rotation problems and then a second set of non-gesturing paper folding problems. The gesture-encouraged group solved more problems correctly in the first set of mental rotation problems and the second set of non-gesturing paper folding problems. We concluded that gesture improves spatial problem solving. Furthermore, gesture has a lasting beneficial effect even when gesture is not available, and the beneficial effect is problem-general. We suggested that gesture enhances spatial problem solving by providing a rich sensori-motor representation of the physical world and picking up information that is less readily available to visuo-spatial processes.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Cohen, E. (2012). [Review of the book Searching for Africa in Brazil: Power and Tradition in Candomblé by Stefania Capone]. Critique of Anthropology, 32, 217-218. doi:10.1177/0308275X12439961.
  • Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.

    Abstract

    Recent game-theoretic simulation and analytical models have demonstrated that cooperative strategies mediated by indicators of cooperative potential, or “tags,” can invade, spread, and resist invasion by noncooperators across a range of population-structure and cost-benefit scenarios. The plausibility of these models is potentially relevant for human evolutionary accounts insofar as humans possess some phenotypic trait that could serve as a reliable tag. Linguistic markers, such as accent and dialect, have frequently been either cursorily defended or promptly dismissed as satisfying the criteria of a reliable and evolutionarily viable tag. This paper integrates evidence from a range of disciplines to develop and assess the claim that speech accent mediated the evolution of tag-based cooperation in humans. Existing evidence warrants the preliminary conclusion that accent markers meet the demands of an evolutionarily viable tag and potentially afforded a cost-effective solution to the challenges of maintaining viable cooperative relationships in diffuse, regional social networks.
  • Collins, J. (2012). The evolution of the Greenbergian word order correlations. In T. C. Scott-Phillips, M. Tamariz, E. A. Cartmill, & J. R. Hurford (Eds.), The evolution of language. Proceedings of the 9th International Conference (EVOLANG9) (pp. 72-79). Singapore: World Scientific.
  • Colzato, L. S., Zech, H., Hommel, B., Verdonschot, R. G., Van den Wildenberg, W. P. M., & Hsieh, S. (2012). Loving-kindness brings loving-kindness: The impact of Buddhism on cognitive self-other integration. Psychonomic Bulletin & Review, 19(3), 541-545. doi:10.3758/s13423-012-0241-y.

    Abstract

    Common wisdom has it that Buddhism enhances compassion and self-other integration. We put this assumption to empirical test by comparing practicing Taiwanese Buddhists with well-matched atheists. Buddhists showed more evidence of self-other integration in the social Simon task, which assesses the degree to which people co-represent the actions of a coactor. This suggests that self-other integration and task co-representation vary as a function of religious practice.
  • Connell, L., Cai, Z. G., & Holler, J. (2012). Do you see what I'm singing? Visuospatial movement biases pitch perception. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 252-257). Austin, TX: Cognitive Science Society.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.

    Abstract

    Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cristia, A., & Peperkamp, S. (2012). Generalizing without encoding specifics: Infants infer phonotactic patterns on sound classes. In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 126-138). Somerville, Mass.: Cascadilla Press.
  • Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A., & Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Frontiers in Psychology, 3, 479. doi:10.3389/fpsyg.2012.00479.

    Abstract

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps left to fill in future work.
  • Cronin, K. A. (2012). Prosocial behaviour in animals: The influence of social relationships, communication and rewards. Animal Behaviour, 84, 1085-1093. doi:10.1016/j.anbehav.2012.08.009.

    Abstract

    Researchers have struggled to obtain a clear account of the evolution of prosocial behaviour despite a great deal of recent effort. The aim of this review is to take a brief step back from addressing the question of evolutionary origins of prosocial behaviour in order to identify contextual factors that are contributing to variation in the expression of prosocial behaviour and hindering progress towards identifying phylogenetic patterns. Most available data come from the Primate Order, and the choice of contextual factors to consider was informed by theory and practice, including the nature of the relationship between the potential donor and recipient, the communicative behaviour of the recipients, and features of the prosocial task including whether rewards are visible and whether the prosocial choice creates an inequity between actors. Conclusions are drawn about the facilitating or inhibiting impact of each of these factors on the expression of prosocial behaviour, and areas for future research are highlighted. Acknowledging the impact of these contextual features on the expression of prosocial behaviours should stimulate new research into the proximate mechanisms that drive these effects, yield experimental designs that better control for potential influences on prosocial expression, and ultimately allow progress towards reconstructing the evolutionary origins of prosocial behaviour.
  • Cronin, K. A., & Sanchez, A. (2012). Social dynamics and cooperation: The case of nonhuman primates and its implications for human behavior. Advances in Complex Systems, 15, 1250066. doi:10.1142/S021952591250066X.

    Abstract

    The social factors that influence cooperation have remained largely uninvestigated but have the potential to explain much of the variation in cooperative behavior observed in the natural world. We show here that certain dimensions of the social environment, namely the size of the social group, the degree of social tolerance expressed, the structure of the dominance hierarchy, and the patterns of dispersal, may influence the emergence and stability of cooperation in predictable ways. Furthermore, the social environment experienced by a species over evolutionary time will have shaped their cognition to provide certain strengths and strategies that are beneficial in their species’ social world. These cognitive adaptations will in turn impact the likelihood of cooperating in a given social environment. Experiments with one primate species, the cottontop tamarin, illustrate how social dynamics may influence emergence and stability of cooperative behavior in this species. We then take a more general viewpoint and argue that the hypotheses presented here require further experimental work and the addition of quantitative modeling to obtain a better understanding of how social dynamics influence the emergence and stability of cooperative behavior in complex systems. We conclude by pointing out subsequent specific directions for models and experiments that will allow relevant advances in the understanding of the emergence of cooperation.
  • Cutfield, S. (2012). Foreword. Australian Journal of Linguistics, 32(4), 457-458.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjin Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.

    Abstract

    Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as structural units in word recognition. First, we found that traditional puns more often substituted part of a mora than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., McQueen, J. M., Jansonius, M., & Bayerl, S. (2002). The lexical statistics of competitor activation in spoken-word recognition. In C. Bow (Ed.), Proceedings of the 9th Australian International Conference on Speech Science and Technology (pp. 40-45). Canberra: Australian Speech Science and Technology Association (ASSTA).

    Abstract

    The Possible Word Constraint is a proposed mechanism whereby listeners avoid recognising words spuriously embedded in other words. It applies to words leaving a vowelless residue between their edge and the nearest known word or syllable boundary. The present study tests the usefulness of this constraint via lexical statistics of both English and Dutch. The analyses demonstrate that the constraint removes a clear majority of embedded words in speech, and thus can contribute significantly to the efficiency of human speech recognition.
  • Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.

    Abstract

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A. (1979). Contemporary reaction to Rudolf Meringer’s speech error research. Historiographia Linguistica, 6, 57-76.
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another, a coup stick snot with standing. The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A. (1996). The comparative study of spoken-language processing. In H. T. Bunnell (Ed.), Proceedings of the Fourth International Conference on Spoken Language Processing: Vol. 1 (pp. 1). New York: Institute of Electrical and Electronics Engineers.

    Abstract

    Psycholinguists are saddled with a paradox. Their aim is to construct a model of human language processing, which will hold equally well for the processing of any language, but this aim cannot be achieved just by doing experiments in any language. They have to compare processing of many languages, and actively search for effects which are specific to a single language, even though a model which is itself specific to a single language is really the last thing they want.
  • Cutler, A., Van Ooijen, B., Norris, D., & Sanchez-Casas, R. (1996). Speeded detection of vowels: A cross-linguistic study. Perception and Psychophysics, 58, 807-822. Retrieved from http://www.psychonomic.org/search/view.cgi?id=430.

    Abstract

    In four experiments, listeners’ response times to detect vowel targets in spoken input were measured. The first three experiments were conducted in English. In two, one using real words and the other, nonwords, detection accuracy was low, targets in initial syllables were detected more slowly than targets in final syllables, and both response time and missed-response rate were inversely correlated with vowel duration. In a third experiment, the speech context for some subjects included all English vowels, while for others, only five relatively distinct vowels occurred. This manipulation had essentially no effect, and the same response pattern was again observed. A fourth experiment, conducted in Spanish, replicated the results in the first three experiments, except that miss rate was here unrelated to vowel duration. We propose that listeners’ responses to vowel targets in naturally spoken input are effectively cautious, reflecting realistic appreciation of vowel variability in natural context.
  • Cutler, A., & Otake, T. (1996). The processing of word prosody in Japanese. In P. McCormack, & A. Russell (Eds.), Proceedings of the 6th Australian International Conference on Speech Science and Technology (pp. 599-604). Canberra: Australian Speech Science and Technology Association.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Cysouw, M., Dediu, D., & Moran, S. (2012). Comment on “Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa”. Science, 335, 657-b. doi:10.1126/science.1208841.

    Abstract

    We show that Atkinson’s (Reports, 15 April 2011, p. 346) intriguing proposal—that global linguistic diversity supports a single language origin in Africa—is an artifact of using suboptimal data, biased methodology, and unjustified assumptions. We criticize his approach using more suitable data, and we additionally provide new results suggesting a more complex scenario for the emergence of global linguistic diversity.
  • Dagklis, A., Ponzoni, M., Govi, S., Cangi, M. G., Pasini, E., Charlotte, F., Vino, A., Doglioni, C., Davi, F., Lossos, I. S., Ntountas, I., Papadaki, T., Dolcetti, R., Ferreri, A. J. M., Stamatopoulos, K., & Ghia, P. (2012). Immunoglobulin gene repertoire in ocular adnexal lymphomas: hints on the nature of the antigenic stimulation. Leukemia, 26, 814-821. doi:10.1038/leu.2011.276.

    Abstract

    Evidence from certain geographical areas links lymphomas of the ocular adnexa marginal zone B-cell lymphomas (OAMZL) with Chlamydophila psittaci (Cp) infection, suggesting that lymphoma development is dependent upon chronic stimulation by persistent infections. Notwithstanding that, the actual immunopathogenetical mechanisms have not yet been elucidated. As in other B-cell lymphomas, insight into this issue, especially with regard to potential selecting ligands, could be provided by analysis of the immunoglobulin (IG) receptors of the malignant clones. To this end, we studied the molecular features of IGs in 44 patients with OAMZL (40% Cp-positive), identifying features suggestive of a pathogenic mechanism of autoreactivity. Herein, we show that lymphoma cells express a distinctive IG repertoire, with electropositive antigen (Ag)-binding sites, reminiscent of autoantibodies (auto-Abs) recognizing DNA. Additionally, five (11%) cases of OAMZL expressed IGs homologous with autoreactive Abs or IGs of patients with chronic lymphocytic leukemia, a disease known for the expression of autoreactive IGs by neoplastic cells. In contrast, no similarity with known anti-Chlamydophila Abs was found. Taken together, these results strongly indicate that OAMZL may originate from B cells selected for their capability to bind Ags and, in particular, auto-Ags. In OAMZL associated with Cp infection, the pathogen likely acts indirectly on the malignant B cells, promoting the development of an inflammatory milieu, where auto-Ags could be exposed and presented, driving proliferation and expansion of self-reactive B cells.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292-314. doi:10.1016/S0749-596X(02)00001-3.

    Abstract

    The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Davidson, D. J., Hanulikova, A., & Indefrey, P. (2012). Electrophysiological correlates of morphosyntactic integration in German phrasal context. Language and Cognitive Processes, 27, 288-311. doi:10.1080/01690965.2011.616448.

    Abstract

    The morphosyntactic paradigm of an inflected word can influence isolated word recognition, but its role in multiple-word phrasal integration is less clear. We examined the electrophysiological response to adjectives in short German prepositional phrases to evaluate whether strong and weak forms of the adjective show a differential response, and whether paradigm variables are related to this response. Twenty native German speakers classified serially presented phrases as grammatically correct or not while the electroencephalogram (EEG) was recorded. A functional mixed effects model of the response to grammatically correct trials revealed a differential response to strong and weak forms of the adjectives. This response difference depended on whether the preceding preposition imposed accusative or dative case. The lexically conditioned information content of the adjectives modulated a later interval of the response. The results indicate that grammatical context modulates the response to morphosyntactic information content, and lend support to the role of paradigm structure in integrative phrasal processing.
  • Dediu, D., & Levinson, S. C. (2012). Abstract profiles of structural stability point to universal tendencies, family-specific factors, and ancient connections between languages. PLoS One, 7(9), e45198. doi:10.1371/journal.pone.0045198.

    Abstract

    Language is the best example of a cultural evolutionary system, able to retain a phylogenetic signal over many thousands of years. The temporal stability (conservatism) of basic vocabulary is relatively well understood, but the stability of the structural properties of language (phonology, morphology, syntax) is still unclear. Here we report an extensive Bayesian phylogenetic investigation of the structural stability of numerous features across many language families and we introduce a novel method for analyzing the relationships between the “stability profiles” of language families. We found that there is a strong universal component across language families, suggesting the existence of universal linguistic, cognitive and genetic constraints. Against this background, however, each language family has a distinct stability profile, and these profiles cluster by geographic area and likely deep genealogical relationships. These stability profiles reveal, for example, the ancient historical relationships between the Siberian and American language families, presumed to be separated by at least 12,000 years. Thus, such higher-level properties of language seen as an evolutionary system might allow the investigation of ancient connections between languages and shed light on the peopling of the world.

    Additional information

    journal.pone.0045198.s001.pdf
  • Dediu, D., & Dingemanse, M. (2012). More than accent: Linguistic and cultural cues in the emergence of tag-based cooperation [Commentary]. Current Anthropology, 53, 606-607. doi:10.1086/667654.

    Abstract

    Commentary on Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.
  • Defina, R., & Majid, A. (2012). Conceptual event units of putting and taking in two unrelated languages. In N. Miyake, D. Peebles, & R. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1470-1475). Austin, TX: Cognitive Science Society.

    Abstract

    People automatically chunk ongoing dynamic events into discrete units. This paper investigates whether linguistic structure is a factor in this process. We test the claim that describing an event with a serial verb construction will influence a speaker’s conceptual event structure. The grammar of Avatime (a Kwa language spoken in Ghana) requires its speakers to describe some, but not all, placement events using a serial verb construction which also encodes the preceding taking event. We tested Avatime and English speakers’ recognition memory for putting and taking events. Avatime speakers were more likely to falsely recognize putting and taking events from episodes associated with take-put serial verb constructions than from episodes associated with other constructions. English speakers showed no difference in false recognitions between episode types. This demonstrates that memory for episodes is related to the type of language used; and, moreover, across languages different conceptual representations are formed for the same physical episode, paralleling habitual linguistic practices.
  • Demir, Ö. E., So, W.-C., Ozyurek, A., & Goldin-Meadow, S. (2012). Turkish- and English-speaking children display sensitivity to perceptual context in referring expressions they produce in speech and gesture. Language and Cognitive Processes, 27, 844-867. doi:10.1080/01690965.2011.589273.

    Abstract

    Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
  • Den Os, E., & Boves, L. (2002). BabelWeb project develops multilingual guidelines. Multilingual Computing and Technologies, 13(1), 33-36.

    Abstract

    A European cooperative effort seeks best-practice architecture and procedures for international sites.
  • DePape, A., Chen, A., Hall, G., & Trainor, L. (2012). Use of prosody and information structure in high functioning adults with Autism in relation to language ability. Frontiers in Psychology, 3, 72. doi:10.3389/fpsyg.2012.00072.

    Abstract

    Abnormal prosody is a striking feature of the speech of those with Autism Spectrum Disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without Autism Spectrum Disorder and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language function. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
  • Diaz, B., Hintz, F., Kiebel, S. J., & von Kriegstein, K. (2012). Dysfunction of the auditory thalamus in developmental dyslexia. Proceedings of the National Academy of Sciences of the United States of America, 109(34), 13841-13846. doi:10.1073/pnas.1119828109.

    Abstract

    Developmental dyslexia, a severe and persistent reading and spelling impairment, is characterized by difficulties in processing speech sounds (i.e., phonemes). Here, we test the hypothesis that these phonological difficulties are associated with a dysfunction of the auditory sensory thalamus, the medial geniculate body (MGB). By using functional MRI, we found that, in dyslexic adults, the MGB responded abnormally when the task required attending to phonemes compared with other speech features. No other structure in the auditory pathway showed distinct functional neural patterns between the two tasks for dyslexic and control participants. Furthermore, MGB activity correlated with dyslexia diagnostic scores, indicating that the task modulation of the MGB is critical for performance in dyslexics. These results suggest that deficits in dyslexia are associated with a failure of the neural mechanism that dynamically tunes MGB according to predictions from cortical areas to optimize speech processing. This view on task-related MGB dysfunction in dyslexics has the potential to reconcile influential theories of dyslexia within a predictive coding framework of brain function.

  • Díaz, B., Mitterer, H., Broersma, M., & Sebastián-Gallés, N. (2012). Individual differences in late bilinguals' L2 phonological processes: From acoustic-phonetic analysis to lexical access. Learning and Individual Differences, 22, 680-689. doi:10.1016/j.lindif.2012.05.005.

    Abstract

    The extent to which the phonetic system of a second language is mastered varies across individuals. The present study evaluates the pattern of individual differences in late bilinguals across different phonological processes. Fifty-five late Dutch-English bilinguals were tested on their ability to perceive a difficult L2 speech contrast (the English /æ/-/ε/ contrast) in three different tasks: A categorization task, a word identification task and a lexical decision task. As a group, L2 listeners were less accurate than native listeners. However, at the individual level, almost half of the L2 listeners scored within the native range in the categorization task whereas a small percentage scored within the native range in the identification and lexical decision tasks. These results show that L2 listeners' performance crucially depends on the nature of the task, with higher L2 listener accuracy on an acoustic-phonetic analysis task than on tasks involving lexical processes. These findings parallel previous results for early bilinguals, where the pattern of performance was consistent with the processing hierarchy proposed by different models of speech perception. The results indicate that the analysis of patterns of non-native performance can provide important insights concerning the architecture of the speech perception system and the issue of language learnability.
  • Dimitrova, D. V., Stowe, L. A., Redeker, G., & Hoeks, J. C. J. (2012). Less is not more: Neural responses to missing and superfluous accents in context. Journal of Cognitive Neuroscience, 24, 2400-2418. doi:10.1162/jocn_a_00302.

    Abstract

    Prosody, particularly accent, aids comprehension by drawing attention to important elements such as the information that answers a question. A study using ERP registration investigated how the brain deals with the interpretation of prosodic prominence. Sentences were embedded in short dialogues and contained accented elements that were congruous or incongruous with respect to a preceding question. In contrast to previous studies, no explicit prosodic judgment task was added. Robust effects of accentuation were evident in the form of an “accent positivity” (200–500 msec) for accented elements irrespective of their congruity. Our results show that incongruously accented elements, that is, superfluous accents, activate a specific set of neural systems that is inactive in case of incongruously unaccented elements, that is, missing accents. Superfluous accents triggered an early positivity around 100 msec poststimulus, followed by a right-lateralized negative effect (N400). This response suggests that redundant information is identified immediately and leads to the activation of a neural system that is associated with semantic processing (N400). No such effects were found when contextually expected accents were missing. In a later time window, both missing and superfluous accents triggered a late positivity on midline electrodes, presumably related to making sense of both kinds of mismatching stimuli. These results challenge previous findings of greater processing for missing accents and suggest that the natural processing of prosody involves a set of distinct, temporally organized neural systems.
  • Dimroth, C. (2002). Topics, assertions and additive words: How L2 learners get from information structure to target-language syntax. Linguistics, 40(4), 891-923. doi:10.1515/ling.2002.033.

    Abstract

    The article compares the integration of topic-related additive words at different stages of untutored L2 acquisition. Data stem from an “additive-elicitation task” that was designed in order to capture topic-related additive words in a context that is at the same time controlled for the underlying information structure and nondeviant from other kinds of narrative discourse. We relate the distinction between stressed and nonstressed forms of the German scope particles and adverbials auch ‘also’, noch ‘another’, wieder ‘again’, and immer noch ‘still’ to a uniform, information-structure-based principle: the stressed variants have scope over the topic information of the relevant utterances. It is then the common function of these additive words to express the additive link between the topic of the present utterance and some previous topic for which the same state of affairs is claimed to hold. This phenomenon has often been referred to as “contrastive topic,” but contrary to what this term suggests, these topic elements are by no means deviant from the default in coherent discourse. In the underlying information structure, the validity of some given state of affairs for the present topic must be under discussion. Topic-related additive words then express that the state of affairs indeed applies to this topic, their function therefore coming close to the function of assertion marking. While this functional correspondence goes along with the formal organization of the basic stages of untutored second-language acquisition, its expression brings linguistic constraints into conflict when the acquisition of finiteness pushes learners to reorganize their utterances according to target-language syntax.
  • Dimroth, C., & Lasser, I. (Eds.). (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness [Special Issue]. Linguistics, 40(4).
  • Dimroth, C., & Lasser, I. (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness. Linguistics, 40(4), 647-651. doi:10.1515/ling.2002.027.
  • Dimroth, C., & Klein, W. (1996). Fokuspartikeln in Lernervarietäten: Ein Analyserahmen und einige Beispiele. Zeitschrift für Literaturwissenschaft und Linguistik, 104, 73-114.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M., Hammond, J., Stehouwer, H., Somasundaram, A., & Drude, S. (2012). A high speed transcription interface for annotating primary linguistic data. In Proceedings of 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (pp. 7-12). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We present a new transcription mode for the annotation tool ELAN. This mode is designed to speed up the process of creating transcriptions of primary linguistic data (video and/or audio recordings of linguistic behaviour). We survey the basic transcription workflow of some commonly used tools (Transcriber, BlitzScribe, and ELAN) and describe how the new transcription interface improves on these existing implementations. We describe the design of the transcription interface and explore some further possibilities for improvement in the areas of segmentation and computational enrichment of annotations.
  • Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones. Language and Linguistics Compass, 6, 654-672. doi:10.1002/lnc3.361.

    Abstract

    Ideophones are marked words that depict sensory imagery found in many of the world’s languages. They are noted for their special forms, distinct grammatical behaviour, rich sensory meanings, and interactional uses related to experience and evidentiality. This review surveys recent developments in ideophone research. Work on the semiotics of ideophones helps explain why they are marked and how they realise the depictive potential of speech. A true semantic typology of ideophone systems is coming within reach through a combination of language-internal analyses and language-independent elicitation tools. Documentation of ideophones in a wide variety of genres as well as sequential analysis of ideophone use in natural discourse leads to new insights about their interactional uses and about their relation to other linguistic devices like reported speech and grammatical evidentials. As the study of ideophones is coming of age, it sheds new light on what is possible and probable in human language.
  • Dingemanse, M., & Majid, A. (2012). The semantic structure of sensory vocabulary in an African language. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 300-305). Austin, TX: Cognitive Science Society.

    Abstract

    The widespread occurrence of ideophones, large classes of words specialized in evoking sensory imagery, is little known outside linguistics and anthropology. Ideophones are a common feature in many of the world’s languages but are underdeveloped in English and other Indo-European languages. Here we study the meanings of ideophones in Siwu (a Kwa language from Ghana) using a pile-sorting task. The goal was to uncover the underlying structure of the lexical space and to examine the claimed link between ideophones and perception. We found that Siwu ideophones are principally organized around fine-grained aspects of sensory perception, and map onto salient psychophysical dimensions identified in sensory science. The results ratify ideophones as dedicated sensory vocabulary and underline the relevance of ideophones for research on language and perception.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2012). The sound of thickness: Prelinguistic infants' associations of space and pitch. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 306-311). Austin, TX: Cognitive Science Society.

    Abstract

    People often talk about musical pitch in terms of spatial metaphors. In English, for instance, pitches can be high or low, whereas in other languages pitches are described as thick or thin. According to psychophysical studies, metaphors in language can also shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place or does it modify preexisting associations? Here we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings in two preferential looking tasks. Dutch infants looked significantly longer at cross-modally congruent stimuli in both experiments, indicating that infants are sensitive to space-pitch associations prior to language. This early presence of space-pitch mappings suggests that these associations do not originate from language. Rather, language may build upon pre-existing mappings and change them gradually via some form of competitive associative learning.
  • Drexler, H., Verbunt, A., & Wittenburg, P. (1996). Max Planck Electronic Information Desk. In B. den Brinker, J. Beek, A. Hollander, & R. Nieuwboer (Eds.), Zesde workshop computers in de psychologie: Programma en uitgebreide samenvattingen (pp. 64-66). Amsterdam: Vrije Universiteit Amsterdam, IFKB.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drude, S. (2012). [Review of the book O português e o tupi no Brasil by Volker Noll and Wolf Dietrich]. Revista Internacional de Lingüística Iberoamericana, 19, 264-268.
  • Drude, S., Trilsbeek, P., & Broeder, D. (2012). Language Documentation and Digital Humanities: The (DoBeS) Language Archive. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 169-173).

    Abstract

    Since the early nineties, the on-going dramatic loss of the world’s linguistic diversity has gained attention, first by linguists and increasingly also by the general public. As a response, the new field of language documentation emerged from around 2000 on, starting with the funding initiative ‘Dokumentation Bedrohter Sprachen’ (DoBeS, funded by the Volkswagen foundation, Germany), soon to be followed by others such as the ‘Endangered Languages Documentation Programme’ (ELDP, at SOAS, London), or, in the USA, ‘Electronic Meta-structure for Endangered Languages Documentation’ (EMELD, led by the LinguistList) and ‘Documenting Endangered Languages’ (DEL, by the NSF). From its very beginning, the new field focused on digital technologies not only for recording in audio and video, but also for annotation, lexical databases, corpus building and archiving, among others. This development does not just coincide with, but is intrinsically interconnected with, the increasing focus on digital data, technology and methods in all sciences, in particular in the humanities.
  • Drude, S., Broeder, D., Trilsbeek, P., & Wittenburg, P. (2012). The Language Archive: A new hub for language resources. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3264-3267). European Language Resources Association (ELRA).

    Abstract

    This contribution presents “The Language Archive” (TLA), a new unit at the MPI for Psycholinguistics, discussing the current developments in management of scientific data, considering the need for new data research infrastructures. Although several initiatives worldwide in the realm of language resources aim at the integration, preservation and mobilization of research data, the state of such scientific data is still often problematic. Data are often not well organized and archived and not described by metadata ― even unique data such as field-work observational data on endangered languages are still mostly on perishable carriers. New data centres are needed that provide trusted, quality-reviewed, persistent services and suitable tools and that take legal and ethical issues seriously. The CLARIN initiative has established criteria for suitable centres. TLA is in a good position to be one such centre. It is based on three essential pillars: (1) a data archive; (2) management, access and annotation tools; (3) archiving and software expertise for collaborative projects. The archive hosts mostly observational data on small languages worldwide and language acquisition data, but also data resulting from experiments.
  • Dunbar, R., Baron, R., Frangou, A., Peirce, E., Van Leeuwen, E. J. C., Stow, J., Partridge, G., MacDonald, I., Barra, V., & Van Vugt, M. (2012). Social laughter is correlated with an elevated pain threshold. Proceedings of the Royal Society of London/B, 279, 1161-1167. doi:10.1098/rspb.2011.1373.

    Abstract

    Although laughter forms an important part of human non-verbal communication, it has received rather less attention than it deserves in both the experimental and the observational literatures. Relaxed social (Duchenne) laughter is associated with feelings of wellbeing and heightened affect, a proximate explanation for which might be the release of endorphins. We tested this hypothesis in a series of six experimental studies in both the laboratory (watching videos) and naturalistic contexts (watching stage performances), using change in pain threshold as an assay for endorphin release. The results show that pain thresholds are significantly higher after laughter than in the control condition. This pain-tolerance effect is due to laughter itself and not simply due to a change in positive affect. We suggest that laughter, through an endorphin-mediated opiate effect, may play a crucial role in social bonding.

    Additional information

    Dunbar_et_al_suppl_material.doc
  • Dunn, M., Reesink, G., & Terrill, A. (2002). The East Papuan languages: A preliminary typological appraisal. Oceanic Linguistics, 41(1), 28-62.

    Abstract

    This paper examines the Papuan languages of Island Melanesia, with a view to considering their typological similarities and differences. The East Papuan languages are thought to be the descendants of the languages spoken by the original inhabitants of Island Melanesia, who arrived in the area up to 50,000 years ago. The Oceanic Austronesian languages are thought to have come into the area with the Lapita peoples 3,500 years ago. With this historical backdrop in view, our paper seeks to investigate the linguistic relationships between the scattered Papuan languages of Island Melanesia. To do this, we survey various structural features, including syntactic patterns such as constituent order in clauses and noun phrases and other features of clause structure, paradigmatic structures of pronouns, and the structure of verbal morphology. In particular, we seek to discern similarities between the languages that might call for closer investigation, with a view to establishing genetic relatedness between some or all of the languages. In addition, in examining structural relationships between languages, we aim to discover whether it is possible to distinguish between original Papuan elements and diffused Austronesian elements of these languages. As this is a vast task, our paper aims merely to lay the groundwork for investigation into these and related questions.
  • Dunn, M., & Terrill, A. (2012). Assessing the evidence for a Central Solomons Papuan family using the Oswalt Monte Carlo Test. Diachronica, 29(1), 1-27. doi:10.1075/dia.29.1.01dun.

    Abstract

    In the absence of comparative method reconstruction, a high rate of lexical cognate candidates is often used as evidence for relationships between languages. This paper uses the Oswalt Monte Carlo Shift test (a variant of Oswalt 1970) to explore the statistical basis of the claim that the four Papuan languages of the Solomon Islands have greater than chance levels of lexical similarity. The results of this test initially appear to show that the lexical similarities between the Central Solomons Papuan languages are statistically significant, but the effect disappears when known Oceanic loanwords are removed. The Oswalt Monte Carlo test is a useful technique to test a claim of greater than chance similarity between any two word lists — with the proviso that undetected loanwords strongly increase the chance of spurious identification.
  • Dunn, M. (2012). [Review of the book "The Dene-Yeniseian connection", ed. by J. Kari and B.A. Potter]. Language, 88(2), 429-431. doi:10.1353/lan.2012.0036.

    Abstract

    A review of “Anthropological papers of the University of Alaska: The Dene-Yeniseian connection”, edited by James Kari and Ben A. Potter. Fairbanks, AK: University of Alaska, Fairbanks, 2010. Pp. vi, 363.
  • Eerland, A., Guadalupe, T., & Zwaan, R. A. (2012). Posture as index for approach-avoidance behavior. PLoS One, 7(2), e31291. doi:10.1371/journal.pone.0031291.

    Abstract

    Approach and avoidance are two behavioral responses that make people tend to approach positive and avoid negative situations. This study examines whether postural behavior is influenced by the affective state of pictures. While standing on the Wii™ Balance Board, participants viewed pleasant, neutral, and unpleasant pictures (passive viewing phase). Then they had to move their body to the left or the right (lateral movement phase) to make the next picture appear. We recorded movements in the anterior-posterior direction to examine approach and avoidant behavior. During passive viewing, people approached pleasant pictures. They avoided unpleasant ones while they made a lateral movement. These findings provide support for the idea that we tend to approach positive and avoid negative situations.
  • Eisner, F. (2012). Competition in the acoustic encoding of emotional speech. In L. McCrohon (Ed.), Five approaches to language evolution. Proceedings of the workshops of the 9th International Conference on the Evolution of Language (pp. 43-44). Tokyo: Evolang9 Organizing Committee.

    Abstract

    Speech conveys not only linguistic meaning but also paralinguistic information, such as features of the speaker’s social background, physiology, and emotional state. Linguistic and paralinguistic information is encoded in speech by using largely the same vocal apparatus and both are transmitted simultaneously in the acoustic signal, drawing on a limited set of acoustic cues. How this simultaneous encoding is achieved, how the different types of information are disentangled by the listener, and how much they interfere with one another is presently not well understood. Previous research has highlighted the importance of acoustic source and filter cues for emotion and linguistic encoding respectively, which may suggest that the two types of information are encoded independently of each other. However, those lines of investigation have been almost completely disconnected (Murray & Arnott, 1993).
  • Elbers, W., Broeder, D., & Van Uytvanck, D. (2012). Proper language resource centers. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3260-3263). European Language Resources Association (ELRA).

    Abstract

    Language resource centers allow researchers to reliably deposit their structured data together with associated meta data and run services operating on this deposited data. We are looking into possibilities to create long-term persistency of both the deposited data and the services operating on this data. Challenges, both technical and non-technical, that need to be solved are the need to replicate more than just the data, proper identification of the digital objects in a distributed environment by making use of persistent identifiers and the set-up of a proper authentication and authorization domain including the management of the authorization information on the digital objects. We acknowledge the investment that most language resource centers have made in their current infrastructure. Therefore one of the most important requirements is the loose coupling with existing infrastructures without the need to make many changes. This shift from a single language resource center into a federated environment of many language resource centers is discussed in the context of a real world center: The Language Archive supported by the Max Planck Institute for Psycholinguistics.
  • Ellis-Davies, K., Sakkalou, E., Fowler, N., Hilbrink, E., & Gattis, M. (2012). CUE: The continuous unified electronic diary method. Behavior Research Methods, 44, 1063-1078. doi:10.3758/s13428-012-0205-1.

    Abstract

    In the present article, we introduce the continuous unified electronic (CUE) diary method, a longitudinal, event-based, electronic parent report method that allows real-time recording of infant and child behavior in natural contexts. Thirty-nine expectant mothers were trained to identify and record target behaviors into programmed handheld computers. From birth to 18 months, maternal reporters recorded the initial, second, and third occurrences of seven target motor behaviors: palmar grasp, rolls from side to back, reaching when sitting, pincer grip, crawling, walking, and climbing stairs. Compliance was assessed as two valid entries per behavior: 97 % of maternal reporters met compliance criteria. Reliability was assessed by comparing diary entries with researcher assessments for three of the motor behaviors: palmar grasp, pincer grip and walking. A total of 81 % of maternal reporters met reliability criteria. For those three target behaviors, age of emergence was compared across data from the CUE diary method and researcher assessments. The CUE diary method was found to detect behaviors earlier and with greater sensitivity to individual differences. The CUE diary method is shown to be a reliable methodological tool for studying processes of change in human development.
  • Enard, W., Przeworski, M., Fisher, S. E., Lai, C. S. L., Wiebe, V., Kitano, T., Pääbo, S., & Monaco, A. P. (2002). Molecular evolution of FOXP2, a gene involved in speech and language [Letters to Nature]. Nature, 418, 869-872. doi:10.1038/nature01025.

    Abstract

    Language is a uniquely human trait likely to have been a prerequisite for the development of human culture. The ability to develop articulate speech relies on capabilities, such as fine control of the larynx and mouth, that are absent in chimpanzees and other great apes. FOXP2 is the first gene relevant to the human ability to develop language. A point mutation in FOXP2 co-segregates with a disorder in a family in which half of the members have severe articulation difficulties accompanied by linguistic and grammatical impairment. This gene is disrupted by translocation in an unrelated individual who has a similar disorder. Thus, two functional copies of FOXP2 seem to be required for acquisition of normal spoken language. We sequenced the complementary DNAs that encode the FOXP2 protein in the chimpanzee, gorilla, orang-utan, rhesus macaque and mouse, and compared them with the human cDNA. We also investigated intraspecific variation of the human FOXP2 gene. Here we show that human FOXP2 contains changes in amino-acid coding and a pattern of nucleotide polymorphism, which strongly suggest that this gene has been the target of selection during recent human evolution.
  • Enfield, N. J. (2002). Semantic analysis of body parts in emotion terminology: Avoiding the exoticisms of 'obstinate monosemy' and 'online extension'. Pragmatics and Cognition, 10(1), 85-106. doi:10.1075/pc.10.12.05enf.

    Abstract

    Investigation of the emotions entails reference to words and expressions conventionally used for the description of emotion experience. Important methodological issues arise for emotion researchers, and the issues are of similarly central concern in linguistic semantics more generally. I argue that superficial and/or inconsistent description of linguistic meaning can have seriously misleading results. This paper is firstly a critique of standards in emotion research for its tendency to underrate and ill-understand linguistic semantics. It is secondly a critique of standards in some approaches to linguistic semantics itself. Two major problems occur. The first is failure to distinguish between conceptually distinct meanings of single words, neglecting the well-established fact that a single phonological string can signify more than one conceptual category (i.e., that words can be polysemous). The second error involves failure to distinguish between two kinds of secondary uses of words: (1) those which are truly active “online” extensions, and (2) those which are conventionalised secondary meanings and not active (qua “extensions”) at all. These semantic considerations are crucial to conclusions one may draw about cognition and conceptualisation based on linguistic evidence.
  • Enfield, N. J. (2002). Parallel innovation and 'coincidence' in linguistic areas: On a bi-clausal extent/result construction of mainland Southeast Asia. In P. Chew (Ed.), Proceedings of the 28th meeting of the Berkeley Linguistics Society. Special session on Tibeto-Burman and Southeast Asian linguistics (pp. 121-128). Berkeley: Berkeley Linguistics Society.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2012). Diversity disregarded [Review of the book Games primates play: An undercover investigation of the evolution and economics of human relationships by Dario Maestripieri]. Science, 337, 1295-1296. doi:10.1126/science.1225365.
  • Enfield, N. J., & Sidnell, J. (2012). Collateral effects, agency, and systems of language use [Reply to commentators]. Current Anthropology, 53(3), 327-329.
  • Enfield, N. J. (2012). [Review of the book "Language, culture, and mind: Natural constructions and social kinds", by Paul Kockelman]. Language in Society, 41(5), 674-677. doi:10.1017/S004740451200070X.
  • Enfield, N. J. (2004). Areal grammaticalisation of postverbal 'acquire' in mainland Southeast Asia. In S. Burusphat (Ed.), Proceedings of the 11th Southeast Asia Linguistics Society Meeting (pp. 275-296). Tempe: Arizona State University.
