Publications

  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Brown, P., & Levinson, S. C. (2009). Politeness: Some universals in language usage [chapter 1, reprint]. In N. Coupland, & A. Jaworski (Eds.), Sociolinguistics: critical concepts [volume III: Interactional sociolinguistics] (pp. 311-323). London: Routledge.
  • Brucato, N., Cassar, O., Tonasso, L., Guitard, E., Migot-Nabias, F., Tortevoye, P., Plancoulaine, S., Larrouy, G., Gessain, A., & Dugoujon, J.-M. (2009). Genetic diversity and dynamics of the Noir Marron settlement in French Guyana: A study combining mitochondrial DNA, Y chromosome and HTLV-1 genotyping [Abstract]. AIDS Research and Human Retroviruses, 25(11), 1258. doi:10.1089/aid.2009.9992.

    Abstract

    The Noir Marron are the direct descendants of thousands of African slaves deported to the Guyanas during the Atlantic slave trade, who later escaped mainly from Dutch colonial plantations. Six ethnic groups are officially recognized, four of which are located in French Guyana: the Aluku, the Ndjuka, the Saramaka, and the Paramaka. The aims of this study were: (1) to determine the Noir Marron settlement through genetic exchanges with other communities such as Amerindians and Europeans; (2) to retrace their origins in Africa. Buffy-coat DNA from 142 Noir Marron, currently living in French Guyana, was analyzed using mtDNA (typing of SNP coding regions and sequencing of HVSI/II) and Y-chromosome markers (STR and SNP typing) to define their genetic profile. Results were compared to an African database composed of published data, updated with genotypes of 82 Fon from Benin, and 128 Ahizi and 63 Yacouba from the Ivory Coast obtained in this study for the same markers. Furthermore, the genomic subtype of HTLV-1 strains (env gp21 and LTR regions), which can be used as a marker of migration of infected populations, was determined for samples from 23 HTLV-1-infected Noir Marron and compared with the corresponding database. MtDNA profiles showed a high haplotype diversity, in which 99% of samples belonged to the major haplogroup L, frequent in Africa. Each haplotype was widely represented on the West African coast, but notably higher homologies were obtained with samples from the Gulf of Guinea. Y-chromosome analysis revealed the same pattern, i.e. a conservation of the African contribution to the Noir Marron genetic profile, with 98% of haplotypes belonging to the major haplogroup E1b1a, frequent in West Africa. The genetic diversity was higher than that observed in African populations, reflecting the breadth of the Noir Marron's ancestral homeland, though a predominant identity in the Gulf of Guinea can be suggested. Concerning HTLV-1 genotyping, all the Noir Marron strains belonged to the large Cosmopolitan A subtype. However, 17/23 (74%) of them clustered with the West African clade comprising samples originating from the Ivory Coast, Ghana, Burkina Faso and Senegal, while 3 others clustered in the Trans-Sahelian clade and the remaining 3 were similar to strains found in individuals in South America. Through the combined analyses of three approaches, we have provided a conclusive image of the genetic profile of the Noir Marron communities studied. The high degree of preservation of the African gene pool contradicts the gene flow that would be expected from the major cultural exchanges observed between the Noir Marron, Europeans and Amerindians. Marital practices and historical events could explain these observations. Consistent with historical and cultural data, the origins of the ethnic groups are widely dispersed throughout West Africa. However, all results converge to suggest individualization from a major birthplace in the Gulf of Guinea.
  • Brucato, N., Tortevoye, P., Plancoulaine, S., Guitard, E., Sanchez-Mazas, A., Larrouy, G., Gessain, A., & Dugoujon, J.-M. (2009). The genetic diversity of three peculiar populations descending from the slave trade: Gm study of Noir Marron from French Guiana. Comptes Rendus Biologies, 332(10), 917-926. doi:10.1016/j.crvi.2009.07.005.

    Abstract

    The Noir Marron communities are the direct descendants of African slaves brought to the Guianas during the four centuries (16th to 19th) of the Atlantic slave trade. Among them, three major ethnic groups have been studied: the Aluku, the Ndjuka and the Saramaka. Their history led them to share close relationships with Europeans and Amerindians, as largely documented in their cultural records. The study of Gm polymorphisms of immunoglobulins may help to estimate the amount of gene flow linked to these cultural exchanges. Surprisingly, very low levels of European contribution (2.6%) and Amerindian contribution (1.7%) are detected in the Noir Marron gene pool. On the other hand, an African contribution of 95.7% traces their origin to West Africa (FST ≤ 0.15). This highly preserved African gene pool of the Noir Marron is unique in comparison to other African American populations of Latin America, who are notably more admixed.

    Additional information

    Table 4
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., Malaisé, V., & Gazendam, L. (2006). A web based general thesaurus browser to support indexing of television and radio programs. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1488-1491).
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Language Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Budwig, N., Narasimhan, B., & Srivastava, S. (2006). Interim solutions: The acquisition of early constructions in Hindi. In E. Clark, & B. Kelly (Eds.), Constructions in acquisition (pp. 163-185). Stanford: CSLI Publications.
  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N. (2009). [Commentary on M. Meschiari, 'Roots of the savage mind: Apophenia and imagination as cognitive process']. Quaderni di semantica, 30(2), 239-242. doi:10.1400/127893.
  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., & Levinson, S. C. (2009). Semplates: A guide to identification and elicitation. In A. Majid (Ed.), Field manual volume 12 (pp. 44-50). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883556.

    Abstract

    Semplates are a new descriptive and theoretical concept in lexical semantics, borne out of recent L&C work in several domains. A semplate can be defined as a configuration consisting of distinct layers of lexemes, each layer drawn from a different form class, mapped onto the same abstract semantic template. Within such a lexical layer, the sense relations between the lexical items are inherited from the underlying template. Thus, the whole set of lexical layers and the underlying template form a cross-categorial configuration in the lexicon. The goal of this task is to find new kinds of macrostructure in the lexicon, with a view to cross-linguistic comparison.
  • Burenhult, N., & Wegener, C. (2009). Preliminary notes on the phonology, orthography and vocabulary of Semnam (Austroasiatic, Malay Peninsula). Journal of the Southeast Asian Linguistics Society, 1, 283-312. Retrieved from http://www.jseals.org/.

    Abstract

    This paper tentatively reports some features of Semnam, a Central Aslian language spoken by some 250 people in the Perak valley, Peninsular Malaysia. It outlines the unusually rich phonemic system of this hitherto undescribed language (e.g. a vowel system comprising 36 distinctive nuclei), and proposes a practical orthography for it. It also includes the c. 1,250-item wordlist on which the analysis is based, collected intermittently in the field in 2006-2008.
  • Burnham, D., Ambikairajah, E., Arciuli, J., Bennamoun, M., Best, C. T., Bird, S., Butcher, A. R., Cassidy, S., Chetty, G., Cox, F. M., Cutler, A., Dale, R., Epps, J. R., Fletcher, J. M., Goecke, R., Grayden, D. B., Hajek, J. T., Ingram, J. C., Ishihara, S., Kemp, N., Kinoshita, Y., Kuratate, T., Lewis, T. W., Loakes, D. E., Onslow, M., Powers, D. M., Rose, P., Togneri, R., Tran, D., & Wagner, M. (2009). A blueprint for a comprehensive Australian English auditory-visual speech corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus (pp. 96-107). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    Large auditory-visual (AV) speech corpora are the grist of modern research in speech science, but no such corpus exists for Australian English. This is unfortunate, for speech science is the brains behind speech technology and applications such as text-to-speech (TTS) synthesis, automatic speech recognition (ASR), speaker recognition and forensic identification, talking heads, and hearing prostheses. Advances in these research areas in Australia require a large corpus of Australian English. Here the authors describe a blueprint for building the Big Australian Speech Corpus (the Big ASC), a corpus of over 1,100 speakers from urban and rural Australia, including speakers of non-indigenous, indigenous, ethnocultural, and disordered forms of Australian English, each of whom would be sampled on three occasions in a range of speech tasks designed by the researchers who would be using the corpus.
  • Campisi, E. (2009). La gestualità co-verbale tra comunicazione e cognizione: In che senso i gesti sono intenzionali. In F. Parisi, & M. Primo (Eds.), Natura, comunicazione, neurofilosofie. Atti del III convegno 2009 del CODISCO. Rome: Squilibri.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex, which we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically, drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potentially spurious multiple analyses/generations.
  • Casasanto, D., Willems, R. M., & Hagoort, P. (2009). Body-specific representations of action verbs: Evidence from fMRI in right- and left-handers. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 875-880). Austin: Cognitive Science Society.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating our own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis (Casasanto, 2009), we used fMRI to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated left premotor cortex during lexical decision on manual action verbs (compared with non-manual action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body-specific: Right and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Casasanto, D. (2009). Embodiment of abstract concepts: Good and bad in right- and left-handers. Journal of Experimental Psychology: General, 138, 351-367. doi:10.1037/a0015854.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis, people who interact with their physical environments in systematically different ways should form correspondingly different mental representations. In a test of this hypothesis, 5 experiments investigated links between handedness and the mental representation of abstract concepts with positive or negative valence (e.g., honesty, sadness, intelligence). Mappings from spatial location to emotional valence differed between right- and left-handed participants. Right-handers tended to associate rightward space with positive ideas and leftward space with negative ideas, but left-handers showed the opposite pattern, associating rightward space with negative ideas and leftward with positive ideas. These contrasting mental metaphors for valence cannot be attributed to linguistic experience, because idioms in English associate good with right but not with left. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they could act more fluently with their dominant hands. These results support the body-specificity hypothesis and provide evidence for the perceptuomotor basis of even the most abstract ideas.
  • Casasanto, D., & Jasmin, K. (2009). Emotional valence is body-specific: Evidence from spontaneous gestures during US presidential debates. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1965-1970). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between motor action and emotion? Here we investigated whether people associate good things more strongly with the dominant side of their bodies, and bad things with the non-dominant side. To find out, we analyzed spontaneous gestures during speech expressing ideas with positive or negative emotional valence (e.g., freedom, pain, compassion). Samples of speech and gesture were drawn from the 2004 and 2008 US presidential debates, which involved two left-handers (Obama, McCain) and two right-handers (Kerry, Bush). Results showed a strong association between the valence of spoken clauses and the hands used to make spontaneous co-speech gestures. In right-handed candidates, right-hand gestures were more strongly associated with positive-valence clauses, and left-hand gestures with negative-valence clauses. Left-handed candidates showed the opposite pattern. Right- and left-handers implicitly associated positive valence more strongly with their dominant hand: the hand they can use more fluently. These results support the body-specificity hypothesis (Casasanto, 2009) and suggest a perceptuomotor basis for even our most abstract ideas.
  • Casasanto, D. (2009). [Review of the book Music, language, and the brain by Aniruddh D. Patel]. Language and Cognition, 1(1), 143-146. doi:10.1515/LANGCOG.2009.007.
  • Casasanto, D., Fotakopoulou, O., & Boroditsky, L. (2009). Space and time in the child's mind: Evidence for a cross-dimensional asymmetry. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1090-1095). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children (N=99) watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., which animal went for a longer time, or a longer distance?). Results showed a reliable cross-dimensional asymmetry: for the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of language used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.
  • Casasanto, D. (2009). Space for thinking. In V. Evans, & P. Chilton (Eds.), Language, cognition and space: State of the art and new directions (pp. 453-478). London: Equinox Publishing.
  • Casasanto, D. (2009). When is a linguistic metaphor a conceptual metaphor? In V. Evans, & S. Pourcel (Eds.), New directions in cognitive linguistics (pp. 127-145). Amsterdam: Benjamins.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Cavaco, P., Curuklu, B., & Petersson, K. M. (2009). Artificial grammar recognition using two spiking neural networks. Frontiers in Neuroinformatics. Conference abstracts: 2nd INCF Congress of Neuroinformatics. doi:10.3389/conf.neuro.11.2009.08.096.

    Abstract

    In this paper we explore the feasibility of artificial (formal) grammar recognition (AGR) using spiking neural networks. A biologically inspired minicolumn architecture is designed as the basic computational unit. A network topography is defined based on the minicolumn architecture, here referred to as nodes, connected with excitatory and inhibitory connections. Nodes in the network represent unique internal states of the grammar’s finite state machine (FSM). Future work to improve the performance of the networks is discussed. The modeling framework developed can be used by neurophysiological research to implement network layouts and compare simulated performance characteristics to actual subject performance.
  • Chen, J. (2006). The acquisition of verb compounding in Mandarin. In E. V. Clark, & B. F. Kelly (Eds.), Constructions in acquisition (pp. 111-136). Stanford: CSLI Publications.
  • Chen, Y., & Braun, B. (2006). Prosodic realization of information structure categories in standard Chinese. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This paper investigates the prosodic realization of information structure categories in Standard Chinese. A number of proper names with different tonal combinations were elicited as a grammatical subject in five pragmatic contexts. Results show that both duration and F0 range of the tonal realizations were adjusted to signal the information structure categories (i.e. theme vs. rheme and background vs. focus). Rhemes consistently induced a longer duration and a more expanded F0 range than themes. Focus, compared to background, generally induced lengthening and F0 range expansion (the presence and magnitude of which, however, are dependent on the tonal structure of the proper names). Within the rheme focus condition, corrective rheme focus induced a more expanded F0 range than normal rheme focus.
  • Chen, A. (2006). Variations in the marking of focus in child language. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 113-114).
  • Chen, A. (2006). Interface between information structure and intonation in Dutch wh-questions. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This study set out to investigate how accent placement is pragmatically governed in WH-questions. Central to this issue are questions such as whether the intonation of the WH-word depends on the information structure of the non-WH word part, whether topical constituents can be accented, and whether constituents in the non-WH word part can be non-topical and accented. Previous approaches, based either on carefully composed examples or on read speech, differ in their treatments of these questions and consequently make opposing claims on the intonation of WH-questions. We addressed these questions by examining a corpus of 90 naturally occurring WH-questions, selected from the Spoken Dutch Corpus. Results show that the intonation of the WH-word is related to the information structure of the non-WH word part. Further, topical constituents can get accented and the accents are not necessarily phonetically reduced. Additionally, certain adverbs, which have no topical relation to the presupposition of the WH-questions, also get accented. They appear to function as a device for enhancing speaker engagement.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, X. S., Collins, L. J., Biggs, P. J., & Penny, D. (2009). High throughput genome-wide survey of small RNAs from the parasitic protists Giardia intestinalis and Trichomonas vaginalis. Genome Biology and Evolution, 1, 165-175. doi:10.1093/gbe/evp017.

    Abstract

    RNA interference (RNAi) is a set of mechanisms which regulate gene expression in eukaryotes. Key elements of RNAi are small sense and antisense RNAs from 19 to 26 nucleotides generated from double-stranded RNAs. miRNAs are a major type of RNAi-associated small RNAs and are found in most eukaryotes studied to date. To investigate whether small RNAs associated with RNAi appear to be present in all eukaryotic lineages, and therefore present in the ancestral eukaryote, we studied two deep-branching protozoan parasites, Giardia intestinalis and Trichomonas vaginalis. Little is known about endogenous small RNAs involved in RNAi of these organisms. Using Illumina Solexa sequencing and genome-wide analysis of small RNAs from these distantly related deep-branching eukaryotes, we identified 10 strong miRNA candidates from Giardia and 11 from Trichomonas. We also found evidence of Giardia siRNAs potentially involved in the expression of variant-specific-surface proteins. In addition, 8 new snoRNAs from Trichomonas are identified. Our results indicate that miRNAs are likely to be general in ancestral eukaryotes, and therefore are likely to be a universal feature of eukaryotes.
  • Chen, A. (2009). Intonation and reference maintenance in Turkish learners of Dutch: A first insight. AILE - Acquisition et Interaction en Langue Etrangère, 28(2), 67-91.

    Abstract

    This paper investigates L2 learners’ use of intonation in reference maintenance in comparison to native speakers at three longitudinal points. Nominal referring expressions were elicited from two untutored Turkish learners of Dutch and five native speakers of Dutch via a film retelling task, and were analysed in terms of pitch span and word duration. Effects of two types of change in information states were examined, between new and given and between new and accessible. We found native-like use of word duration in both types of change early on but different performances between learners and development over time in one learner in the use of pitch span. Further, the use of morphosyntactic devices had different effects on the two learners. The inter-learner differences and late systematic use of pitch span, in spite of similar use of pitch span in learners’ L1 and L2, suggest that learning may play a role in the acquisition of intonation as a device for reference maintenance.
  • Chen, A. (2009). Perception of paralinguistic intonational meaning in a second language. Language Learning, 59(2), 367-409.
  • Chen, A. (2009). The phonetics of sentence-initial topic and focus in adult and child Dutch. In M. Vigário, S. Frota, & M. Freitas (Eds.), Phonetics and Phonology: Interactions and interrelations (pp. 91-106). Amsterdam: Benjamins.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to an S#WS versus SW#S classification.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs, and in particular whether or not these syllable programs are stored separately from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence that the syllable plays a functionally important role in speech production and that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advance planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. A subsequent series of experiments demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined design to disentangle the two effects. The results of this last experiment converged with those of the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Cholin, J., & Levelt, W. J. M. (2009). Effects of syllable preparation and syllable frequency in speech production: Further evidence for syllabic units at a post-lexical level. Language and Cognitive Processes, 24, 662-684. doi:10.1080/01690960802348852.

    Abstract

    In the current paper, we asked at what level in the speech planning process speakers retrieve stored syllables. There is evidence that syllable structure plays an essential role in the phonological encoding of words (e.g., online syllabification and phonological word formation). There is also evidence that syllables are retrieved as whole units. However, findings that clearly pinpoint these effects to specific levels in speech planning are scarce. We used a naming variant of the implicit priming paradigm to contrast voice onset latencies for frequency-manipulated disyllabic Dutch pseudo-words. While prior implicit priming studies only manipulated the item's form and/or syllable structure overlap we introduced syllable frequency as an additional factor. If the preparation effect for syllables obtained in the implicit priming paradigm proceeds beyond phonological planning, i.e., includes the retrieval of stored syllables, then the preparation effect should differ for high- and low frequency syllables. The findings reported here confirm this prediction: Low-frequency syllables benefit significantly more from the preparation than high-frequency syllables. Our findings support the notion of a mental syllabary at a post-lexical level, between the levels of phonological and phonetic encoding.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster than low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and with disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word-form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model of Levelt et al., are discussed.
  • Chu, M., & Kita, S. (2009). Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 591-595). Austin, TX: Cognitive Science Society.

    Abstract

    When we speak, we spontaneously produce gestures (co-speech gestures). Co-speech gestures and speech production are closely interlinked. However, the exact nature of the link is still under debate. To address the question of whether co-speech gestures originate from the speech production system or from a system independent of speech production, the present study examined the relationship between co-speech and co-thought gestures. Co-thought gestures, produced during silent thinking without speaking, presumably originate from a system independent of the speech production processes. We found a positive correlation between the production frequency of co-thought and co-speech gestures, regardless of the communicative function that co-speech gestures might serve. We therefore suggest that co-speech gestures and co-thought gestures originate from a common system that is independent of the speech production processes.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Collins, L. J., & Chen, X. S. (2009). Ancestral RNA: The RNA biology of the eukaryotic ancestor. RNA Biology, 6(5), 495-502. doi:10.4161/rna.6.5.9551.

    Abstract

    Our knowledge of RNA biology within eukaryotes has exploded over the last five years. Within new research we see that some features that were once thought to be part of multicellular life have now been identified in several protist lineages. Hence, it is timely to ask which features of eukaryote RNA biology are ancestral to all eukaryotes. We focus on RNA-based regulation and epigenetic mechanisms that use small regulatory ncRNAs and long ncRNAs, to highlight some of the many questions surrounding eukaryotic ncRNA evolution.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Crasborn, O., Sloetjes, H., Auer, E., & Wittenburg, P. (2006). Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In C. Vetoori (Ed.), Proceedings of the 2nd Workshop on the Representation and Processing of Sign languages: Lexicographic matters and didactic scenarios (pp. 82-87). Paris: ELRA.

    Abstract

    This paper describes hardware and software that can be used for the phonetic study of sign languages. The field of sign language phonetics is characterised, and the hardware that is currently in use is described. The paper focuses on the software that was developed to enable the recording of finger and hand movement data, and the additions to the ELAN annotation software that facilitate the further visualisation and analysis of the data.
  • Cronin, K. A., Schroeder, K. K. E., Rothwell, E. S., Silk, J. B., & Snowdon, C. T. (2009). Cooperatively breeding cottontop tamarins (Saguinus oedipus) do not donate rewards to their long-term mates. Journal of Comparative Psychology, 123(3), 231-241. doi:10.1037/a0015094.

    Abstract

    This study tested the hypothesis that cooperative breeding facilitates the emergence of prosocial behavior by presenting cottontop tamarins (Saguinus oedipus) with the option to provide food rewards to pair-bonded mates. In Experiment 1, tamarins could provide rewards to mates at no additional cost while obtaining rewards for themselves. Contrary to the hypothesis, tamarins did not demonstrate a preference to donate rewards, behaving similar to chimpanzees in previous studies. In Experiment 2, the authors eliminated rewards for the donor for a stricter test of prosocial behavior, while reducing separation distress and food preoccupation. Again, the authors found no evidence for a donation preference. Furthermore, tamarins were significantly less likely to deliver rewards to mates when the mate displayed interest in the reward. The results of this study contrast with those recently reported for cooperatively breeding common marmosets, and indicate that prosocial preferences in a food donation task do not emerge in all cooperative breeders. In previous studies, cottontop tamarins have cooperated and reciprocated to obtain food rewards; the current findings sharpen understanding of the boundaries of cottontop tamarins’ food-provisioning behavior.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjin Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2006). Rudolf Meringer. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 12-13). Amsterdam: Elsevier.

    Abstract

    Rudolf Meringer (1859–1931), Indo-European philologist, published two collections of slips of the tongue, annotated and interpreted. From 1909, he was the founding editor of the cultural morphology movement's journal Wörter und Sachen. Meringer was the first to note the linguistic significance of speech errors, and his interpretations have stood the test of time. This work, rather than his mainstream philological research, has proven his most lasting linguistic contribution.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Kim, J., & Otake, T. (2006). On the limits of L1 influence on non-L1 listening: Evidence from Japanese perception of Korean. In P. Warren, & C. I. Watson (Eds.), Proceedings of the 11th Australian International Conference on Speech Science & Technology (pp. 106-111).

    Abstract

    Language-specific procedures which are efficient for listening to the L1 may be applied to non-native spoken input, often to the detriment of successful listening. However, such misapplications of L1-based listening do not always happen. We propose, based on the results from two experiments in which Japanese listeners detected target sequences in spoken Korean, that an L1 procedure is only triggered if requisite L1 features are present in the input.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A. (2006). Van spraak naar woorden in een tweede taal. In J. Morais, & G. d'Ydewalle (Eds.), Bilingualism and Second Language Acquisition (pp. 39-54). Brussels: Koninklijke Vlaamse Academie van België voor Wetenschappen en Kunsten.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A., & Pasveer, D. (2006). Explaining cross-linguistic differences in effects of lexical stress on spoken-word recognition. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD press.

    Abstract

    Experiments have revealed differences across languages in listeners’ use of stress information in recognising spoken words. Previous comparisons of the vocabulary of Spanish and English had suggested that the explanation of this asymmetry might lie in the extent to which considering stress in spoken-word recognition allows rejection of unwanted competition from words embedded in other words. This hypothesis was tested on the vocabularies of Dutch and German, for which word recognition results resemble those from Spanish more than those from English. The vocabulary statistics likewise revealed that in each language, the reduction of embeddings resulting from taking stress into account is more similar to the reduction achieved in Spanish than in English.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2006). Coping with speaker-related variation via abstract phonemic categories. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 31-32).
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (for example, a coup stick in acoustic, or not with standing in notwithstanding). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.

    Abstract

    English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
  • Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.

    Abstract

    Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models of phonemic processing incorporating obligatory influence of lexical information in phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (2009). Psycholinguistics in our time. In P. Rabbitt (Ed.), Inside psychology: A science over 50 years (pp. 91-101). Oxford: Oxford University Press.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.

    Abstract

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191–243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
  • Dabrowska, E., Rowland, C. F., & Theakston, A. (2009). The acquisition of questions with long-distance dependencies. Cognitive Linguistics, 20(3), 571-597. doi:10.1515/COGL.2009.025.

    Abstract

    A number of researchers have claimed that questions and other constructions with long distance dependencies (LDDs) are acquired relatively early, by age 4 or even earlier, in spite of their complexity. Analysis of LDD questions in the input available to children suggests that they are extremely stereotypical, raising the possibility that children learn lexically specific templates such as WH do you think S-GAP? rather than general rules of the kind postulated in traditional linguistic accounts of this construction. We describe three elicited imitation experiments with children aged from 4;6 to 6;9 and adult controls. Participants were asked to repeat prototypical questions (i.e., questions which match the hypothesised template), unprototypical questions (which depart from it in several respects) and declarative counterparts of both types of interrogative sentences. The children performed significantly better on the prototypical variants of both constructions, even when both variants contained exactly the same lexical material, while adults showed prototypicality effects for LDD questions only. These results suggest that a general declarative complementation construction emerges quite late in development (after age 6), and that even adults rely on lexically specific templates for LDD questions.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Davids, N. (2009). Neurocognitive markers of phonological processing: A clinical perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Davids, N., Van den Brink, D., Van Turennout, M., Mitterer, H., & Verhoeven, L. (2009). Towards neurophysiological assessment of phonemic discrimination: Context effects of the mismatch negativity. Clinical Neurophysiology, 120, 1078-1086. doi:10.1016/j.clinph.2009.01.018.

    Abstract

    This study focusses on the optimal paradigm for simultaneous assessment of auditory and phonemic discrimination in clinical populations. We investigated (a) whether pitch and phonemic deviants presented together in one sequence are able to elicit mismatch negativities (MMNs) in healthy adults and (b) whether MMN elicited by a change in pitch is modulated by the presence of the phonemic deviants.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Davidson, D. J., & Indefrey, P. (2009). An event-related potential study on changes of violation and error responses during morphosyntactic learning. Journal of Cognitive Neuroscience, 21(3), 433-446. Retrieved from http://www.mitpressjournals.org/doi/pdf/10.1162/jocn.2008.21031.

    Abstract

    Based on recent findings showing electrophysiological changes in adult language learners after relatively short periods of training, we hypothesized that adult Dutch learners of German would show responses to German gender and adjective declension violations after brief instruction. Adjective declension in German differs from previously studied morphosyntactic regularities in that the required suffixes depend not only on the syntactic case, gender, and number features to be expressed, but also on whether or not these features are already expressed on linearly preceding elements in the noun phrase. Violation phrases and matched controls were presented over three test phases (pretest and training on the first day, and a posttest one week later). During the pretest, no electrophysiological differences were observed between violation and control conditions, and participants’ classification performance was near chance. During the training and posttest phases, classification improved, and there was a P600-like violation response to declension but not gender violations. An error-related response during training was associated with improvement in grammatical discrimination from pretest to posttest. The results show that rapid changes in neuronal responses can be observed in adult learners of a complex morphosyntactic rule, and also that error-related electrophysiological responses may relate to grammar acquisition.
  • Davidson, D. J., & Indefrey, P. (2009). Plasticity of grammatical recursion in German learners of Dutch. Language and Cognitive Processes, 24, 1335-1369. doi:10.1080/01690960902981883.

    Abstract

    Previous studies have examined cross-serial and embedded complement clauses in West Germanic in order to distinguish between different types of working memory models of human sentence processing, as well as different formal language models. Here, adult plasticity in the use of these constructions is investigated by examining the response of German-speaking learners of Dutch using magnetoencephalography (MEG). In three experimental sessions spanning their initial acquisition of Dutch, participants performed a sentence-scene matching task with Dutch sentences including two different verb constituent orders (Dutch verb order, German verb order), and in addition rated similar constructions in a separate rating task. The average planar gradient of the evoked field to the initial verb within the cluster revealed a larger evoked response for the German order relative to the Dutch order between 0.2 and 0.4 s over frontal sensors after 2 weeks, but not initially. The rating data showed that constructions consistent with Dutch grammar, but inconsistent with the German grammar, were initially rated as unacceptable, but this preference reversed after 3 months. The behavioural and electrophysiological results suggest that cortical responses to verb order preferences in complement clauses can change within 3 months after the onset of adult language learning, implying that this aspect of grammatical processing remains plastic into adulthood.
  • Davies, R., Kidd, E., & Lander, K. (2009). Investigating the psycholinguistic correlates of speechreading in preschool age children. International Journal of Language & Communication Disorders, 44(2), 164-174. doi:10.1080/13682820801997189.

    Abstract

    Background: Previous research has found that newborn infants can match phonetic information in the lips and voice from as young as ten weeks old. There is evidence that access to visual speech is necessary for normal speech development. Although we have an understanding of this early sensitivity, very little research has investigated older children's ability to speechread whole words. Aims: The aim of this study was to identify aspects of preschool children's linguistic knowledge and processing ability that may contribute to speechreading ability. We predicted a significant correlation between receptive vocabulary and speechreading, as well as phonological working memory to be a predictor of speechreading performance. Methods & Procedures: Seventy-six children (n = 76) aged between 2;10 and 4;11 years participated. Children were given three pictures and were asked to point to the picture that they thought that the experimenter had silently mouthed (ten trials). Receptive vocabulary and phonological working memory were also assessed. The results were analysed using Pearson correlations and multiple regressions. Outcomes & Results: The results demonstrated that the children could speechread at a rate greater than chance. Pearson correlations revealed significant, positive correlations between receptive vocabulary and speechreading score, phonological error rate and age. Further correlations revealed significant, positive relationships between The Children's Test of Non-Word Repetition (CNRep) and speechreading score, phonological error rate and age. Multiple regression analyses showed that receptive vocabulary best predicts speechreading ability over and above phonological working memory. Conclusions & Implications: The results suggest that preschool children are capable of speechreading, and that this ability is related to vocabulary size. 
    This suggests that children aged between 2;10 and 4;11 are sensitive to visual information in the form of audio-visual mappings. We suggest that current and future therapies are correct to include visual feedback as a therapeutic tool; however, future research needs to be conducted in order to elucidate further the role of speechreading in development.
  • Dediu, D. (2009). Genetic biasing through cultural transmission: Do simple Bayesian models of language evolution generalize? Journal of Theoretical Biology, 259, 552-561. doi:10.1016/j.jtbi.2009.04.004.

    Abstract

    The recent Bayesian approaches to language evolution and change seem to suggest that genetic biases can impact on the characteristics of language, but, at the same time, that its cultural transmission can partially free it from these same genetic constraints. One of the current debates centres on the striking differences between sampling and a posteriori maximising Bayesian learners, with the first converging on the prior bias while the latter allows a certain freedom to language evolution. The present paper shows that this difference disappears if populations more complex than a single teacher and a single learner are considered, with the resulting behaviours more similar to the sampler. This suggests that generalisations based on the language produced by Bayesian agents in such homogeneous single agent chains are not warranted. It is not clear which of the assumptions in such models are responsible, but these findings seem to support the rising concerns on the validity of the “acquisitionist” assumption, whereby the locus of language change and evolution is taken to be the first language acquirers (children) as opposed to the competent language users (the adults).
  • Dediu, D. (2006). Mostly out of Africa, but what did the others have to say? In A. Cangelosi, A. D. Smith, & K. Smith (Eds.), The evolution of language: proceedings of the 6th International Conference (EVOLANG6) (pp. 59-66). World Scientific.

    Abstract

    The Recent Out-of-Africa human evolutionary model seems to be generally accepted. This impression is very prevalent outside palaeoanthropological circles (including studies of language evolution), but proves to be unwarranted. This paper offers a short review of the main challenges facing ROA and concludes that alternative models based on the concept of metapopulation must also be considered. The implications of such a model for language evolution and diversity are briefly reviewed.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
  • Dietrich, C. (2006). The acquisition of phonological structure: Distinguishing contrastive from non-contrastive variation. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57829.
  • Dimitriadis, A., Kemps-Snijders, M., Wittenburg, P., Everaert, M., & Levinson, S. C. (2006). Towards a linguist's workbench supporting eScience methods. In Proceedings of the 2nd IEEE International Conference on e-Science and Grid Computing.
  • Dimitrova, D. V., Redeker, G., & Hoeks, J. C. J. (2009). Did you say a BLUE banana? The prosody of contrast and abnormality in Bulgarian and Dutch. In 10th Annual Conference of the International Speech Communication Association [Interspeech 2009] (pp. 999-1002). ISCA Archive.

    Abstract

    In a production experiment on Bulgarian that was based on a previous study on Dutch [1], we investigated the role of prosody when linguistic and extra-linguistic information coincide or contradict. Speakers described abnormally colored fruits in conditions where contrastive focus and discourse relations were varied. We found that the coincidence of contrast and abnormality enhances accentuation in Bulgarian as it did in Dutch. Surprisingly, when both factors are in conflict, the prosodic prominence of abnormality often overruled focus accentuation in both Bulgarian and Dutch, though the languages also show marked differences.
  • Dimroth, C., & Narasimhan, B. (2009). Accessibility and topicality in children's use of word order. In J. Chandlee, M. Franchini, S. Lord, & G. M. Rheiner (Eds.), Proceedings of the 33rd annual Boston University Conference on Language Development (BUCLD) (pp. 133-138).
  • Dimroth, C., & Klein, W. (2009). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 153, 5-9.