Publications

Displaying 101 - 200 of 1044
  • Broersma, M. (2012). Lexical representation of perceptually difficult second-language words [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 2053.

    Abstract

    This study investigates the lexical representation of second-language words that contain difficult-to-distinguish phonemes. Dutch and English listeners' perception of partially onset-overlapping word pairs like DAFFOdil-DEFIcit and minimal pairs like flash-flesh was assessed with two cross-modal priming experiments, examining two stages of lexical processing: activation of intended and mismatching lexical representations (Exp.1) and competition between those lexical representations (Exp.2). Exp.1 shows that truncated primes like daffo- and defi- activated lexical representations of mismatching words (either deficit or daffodil) more for L2 than L1 listeners. Exp.2 shows that for minimal pairs, matching primes (prime: flash, target: FLASH) facilitated recognition of visual targets for L1 and L2 listeners alike, whereas mismatching primes (flesh, FLASH) inhibited recognition consistently for L1 listeners but only in a minority of cases for L2 listeners; in most cases, for them, primes facilitated recognition of both words equally strongly. Importantly, all listeners experienced a combination of facilitation and inhibition (and all items sometimes caused facilitation and sometimes inhibition). These results suggest that for all participants, some of the minimal pairs were represented with separate, native-like lexical representations, whereas other pairs were stored as homophones. The nature of the L2 lexical representations thus varied strongly even within listeners.
  • Brookshire, G., & Casasanto, D. (2012). Motivation and motor control: Hemispheric specialization for approach motivation reverses with handedness. PLoS One, 7(4), e36036. doi:10.1371/journal.pone.0036036.

    Abstract

    Background: According to decades of research on affective motivation in the human brain, approach motivational states are supported primarily by the left hemisphere and avoidance states by the right hemisphere. The underlying cause of this specialization, however, has remained unknown. Here we conducted a first test of the Sword and Shield Hypothesis (SSH), according to which the hemispheric laterality of affective motivation depends on the laterality of motor control for the dominant hand (i.e., the "sword hand," used preferentially to perform approach actions) and the nondominant hand (i.e., the "shield hand," used preferentially to perform avoidance actions). Methodology/Principal Findings: To determine whether the laterality of approach motivation varies with handedness, we measured alpha-band power (an inverse index of neural activity) in right- and left-handers during resting-state electroencephalography and analyzed hemispheric alpha-power asymmetries as a function of the participants' trait approach motivational tendencies. Stronger approach motivation was associated with more left-hemisphere activity in right-handers, but with more right-hemisphere activity in left-handers. Conclusions: The hemispheric correlates of approach motivation reversed between right- and left-handers, consistent with the way they typically use their dominant and nondominant hands to perform approach and avoidance actions. In both right- and left-handers, approach motivation was lateralized to the same hemisphere that controls the dominant hand. This covariation between neural systems for action and emotion provides initial support for the SSH.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2012). Speech reductions change the dynamics of competition during spoken word recognition. Language and Cognitive Processes, 27(4), 539-571. doi:10.1080/01690965.2011.555268.

    Abstract

    Three eye-tracking experiments investigated how phonological reductions (e.g., "puter" for "computer") modulate phonological competition. Participants listened to sentences extracted from a spontaneous speech corpus and saw four printed words: a target (e.g., "computer"), a competitor similar to the canonical form (e.g., "companion"), one similar to the reduced form (e.g., "pupil"), and an unrelated distractor. In Experiment 1, we presented canonical and reduced forms in a syllabic and in a sentence context. Listeners directed their attention to a similar degree to both competitors independent of the target's spoken form. In Experiment 2, we excluded reduced forms and presented canonical forms only. In such a listening situation, participants showed a clear preference for the "canonical form" competitor. In Experiment 3, we presented canonical forms intermixed with reduced forms in a sentence context and replicated the competition pattern of Experiment 1. These data suggest that listeners penalize acoustic mismatches less strongly when listening to reduced speech than when listening to fully articulated speech. We conclude that flexibility to adjust to speech-intrinsic factors is a key feature of the spoken word recognition system.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2012). Can hearing puter activate pupil? Phonological competition and the processing of reduced spoken words in spontaneous conversations. Quarterly Journal of Experimental Psychology, 65, 2193-2220. doi:10.1080/17470218.2012.693109.

    Abstract

    In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.
  • Brouwer, H., Fitz, H., & Hoeks, J. (2012). Getting real about semantic illusions: Rethinking the functional role of the P600 in language comprehension. Brain Research, 1446, 127-143. doi:10.1016/j.brainres.2012.01.055.

    Abstract

    In traditional theories of language comprehension, syntactic and semantic processing are inextricably linked. This assumption has been challenged by the ‘Semantic Illusion Effect’ found in studies using Event Related brain Potentials. Semantically anomalous sentences did not produce the expected increase in N400 amplitude but rather one in P600 amplitude. To explain these findings, complex models have been devised in which an independent semantic processing stream can arrive at a sentence interpretation that may differ from the interpretation prescribed by the syntactic structure of the sentence. We review five such multi-stream models and argue that they do not account for the full range of relevant results because they assume that the amplitude of the N400 indexes some form of semantic integration. Based on recent evidence we argue that N400 amplitude might reflect the retrieval of lexical information from memory. On this view, the absence of an N400-effect in Semantic Illusion sentences can be explained in terms of priming. Furthermore, we suggest that semantic integration, which has previously been linked to the N400 component, might be reflected in the P600 instead. When combined, these functional interpretations result in a single-stream account of language processing that can explain all of the Semantic Illusion data.
  • Brouwer, S., Van Engen, K. J., Calandruccio, L., & Bradlow, A. R. (2012). Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content. The Journal of the Acoustical Society of America, 131(2), 1449-1464. doi:10.1121/1.3675943.

    Abstract

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language.
  • Brown, P. (2007). Principles of person reference in Tzeltal conversation. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 172-202). Cambridge: Cambridge University Press.

    Abstract

    This paper focuses on ‘minimality’ in initial references to persons in the Mayan language Tzeltal, spoken in southern Mexico. Inspection of initial person-referring expressions in 25 Tzeltal videotaped conversations reveals that, in this language, if speaker and/or recipient are related through ‘kinship’ to the referent, a kin term (or other relational term like ‘namesake’) is the default option for initial reference to persons. Additionally, further specification via names and/or geographical location (of home base) is also often used to home in on the referent (e.g. ‘your-cousin Alonzo’, ‘our mother’s brother behind the mountain’). And often (~ 70 cases in the data examined) initial references to persons combine more than one referring expression, for example: ‘this old man my brother-in-law old man Antonio here in the pines’, or ‘the father of that brother-in-law of yours the father-in-law of your elder-sister Xmaruch’. Seen in the light of Schegloff’s (1979, 1996) two basic preferences for referring to persons in conversation: (i.) for a recognitional form and (ii.) for a minimal form, these Tzeltal person-referring expressions seem to be relatively elaborated. This paper examines the sequential contexts where such combinations appear, and proposes a third preference operative in Tzeltal (and possibly in other kinship-term-based systems) for associating the referent as closely as possible to the participants.
  • Brown, C. M., & Hagoort, P. (1989). De LAT-relatie tussen lichaam en geest: Over de implicaties van neurowetenschap voor onze kennis van cognitie. In C. Brown, P. Hagoort, & T. Meijering (Eds.), Vensters op de geest: Cognitie op het snijvlak van filosofie en psychologie (pp. 50-81). Utrecht: Grafiet.
  • Brown, P. (1983). [Review of the book Conversational routine: Explorations in standardized communication situations and prepatterned speech ed. by Florian Coulmas]. Language, 59, 215-219.
  • Brown, P. (1989). [Review of the book Language, gender, and sex in comparative perspective ed. by Susan U. Philips, Susan Steele, and Christine Tanz]. Man, 24(1), 192.
  • Brown, P. (1983). [Review of the books Mayan Texts I, II, and III ed. by Louanna Furbee-Losee]. International Journal of American Linguistics, 49, 337-341.
  • Brown, A. (2007). Crosslinguistic influence in first and second languages: Convergence in speech and gesture. PhD Thesis, Boston University, Boston.

    Abstract

    Research on second language acquisition typically focuses on how a first language (L1) influences a second language (L2) in different linguistic domains and across modalities. This dissertation, in contrast, explores interactions between languages in the mind of a language learner by asking 1) can an emerging L2 influence an established L1? 2) if so, how is such influence realized? 3) are there parallel influences of the L1 on the L2? These questions were investigated for the expression of Manner (e.g. climb, roll) and Path (e.g. up, down) of motion, areas where substantial crosslinguistic differences exist in speech and co-speech gesture. Japanese and English are typologically distinct in this domain; therefore, narrative descriptions of four motion events were elicited from monolingual Japanese speakers (n=16), monolingual English speakers (n=13), and native Japanese speakers with intermediate knowledge of English (narratives elicited in both their L1 and L2, n=28). Ways in which Path and Manner were expressed at the lexical, syntactic, and gestural levels were analyzed in monolingual and non-monolingual production. Results suggest mutual crosslinguistic influences. In their L1, native Japanese speakers with knowledge of English displayed both Japanese- and English-like use of morphosyntactic elements to express Path and Manner (i.e. a combination of verbs and other constructions). Consequently, non-monolingual L1 discourse contained significantly more Path expressions per clause, with significantly greater mention of Goal of motion than monolingual Japanese and English discourse. Furthermore, the gestures of non-monolingual speakers diverged from their monolingual counterparts with differences in depiction of Manner and gesture perspective (character versus observer). Importantly, non-monolingual production in the L1 was not ungrammatical, but simply reflected altered preferences. As for L2 production, many effects of L1 influence were seen, crucially in areas parallel to those described above. Overall, production by native Japanese speakers who knew English differed from that of monolingual Japanese and English speakers. But L1 and L2 production within non-monolingual individuals was similar. These findings imply a convergence of L1-L2 linguistic systems within the mind of a language learner. Theoretical and methodological implications for SLA research and language assessment with respect to the 'native speaker standard language' are discussed.
  • Brown, P. (2007). Culture-specific influences on semantic development: Acquiring the Tzeltal 'benefactive' construction. In B. Pfeiler (Ed.), Learning indigenous languages: Child language acquisition in Mesoamerica (pp. 119-154). Berlin: Mouton de Gruyter.

    Abstract

    Three-place predicates are an important locus for examining how children acquire argument structure and how this process is influenced by the typology of the language they are learning as well as by culturally-specific semantic categories. From a typological perspective, there is reason to expect children to have some trouble expressing three-participant events, given the considerable variation across languages in how these are linguistically coded. Verbs of transfer ('give', 'receive', etc.) are often considered to be the verbs which canonically appear with three arguments (e.g., Slobin 1985, Gleitman 1990). Yet in the Mayan language Tzeltal, verbs other than transfer verbs appear routinely in the ditransitive construction. Although the three participants are rarely all overtly expressed as NPs, this construction ensures that the 'recipient' or 'affectee' participant is overtly marked on the verb. Tzeltal children's early acquisition of this construction (well before the age of 3;0) shows that they are sensitive to its abstract constructional meaning of 'affected' third participant: they do not go initially for 'transfer' meanings but are attuned to benefactive or malefactive uses despite the predominance of the verb 'give' in the input with this construction. This poses a challenge to acquisition theories (Goldberg 2001, Ninio 1999) that see construction meaning arising from the meaning of the verb most frequently used in a construction.
  • Brown, P. (2007). 'She had just cut/broken off her head': Cutting and breaking verbs in Tzeltal. Cognitive Linguistics, 18(2), 319-330. doi:10.1515/COG.2007.019.

    Abstract

    This paper describes the lexical resources for expressing events of cutting and breaking (C&B hereafter) in the Mayan language Tzeltal. This notional set of verbs is not a class in any grammatical sense; C&B verbs are formally indistinguishable from many other transitive state-change verbs. But they nicely reveal the characteristic specificity of Tzeltal verb semantics: C&B actions are finely differentiated according to the spatial and textural properties of the theme object, with no superordinate term meaning either 'cut in general' or 'break in general'. The paper characterizes the semantics of these verbs and shows that in the great majority of cases it does not predict their argument structure.
  • Brown, P., & Levinson, S. C. (2007). Gesichtsbedrohende Akte [reprint: Face-threatening acts, 1987]. In S. K. Herrmann, S. Kraemer, & H. Kuch (Eds.), Verletzende Worte: Die Grammatik sprachlicher Missachtung (pp. 59-88). Bielefeld: Transcript Verlag.

    Abstract

    This article is a reprint of parts of chapters 2 and 3 from Brown and Levinson (1987) discussing the concept of 'Face Threatening Acts'.
  • Brown, A., & Gullberg, M. (2012). Multicompetence and native speaker variation in clausal packaging in Japanese. Second Language Research, 28, 415-442. doi:10.1177/0267658312455822.

    Abstract

    This work was supported by the Max Planck Institute for Psycholinguistics and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to M Gullberg and P Indefrey).
  • Brown, P. (1980). How and why are women more polite: Some evidence from a Mayan community. In S. McConnell-Ginet, R. Borker, & N. Furman (Eds.), Women and language in literature and society (pp. 111-136). New York: Praeger.
  • Brown, P. (1997). Isolating the CVC root in Tzeltal Mayan: A study of children's first verbs. In E. V. Clark (Ed.), Proceedings of the 28th Annual Child Language Research Forum (pp. 41-52). Stanford, CA: CSLI/University of Chicago Press.

    Abstract

    How do children isolate the semantic package contained in verb roots in the Mayan language Tzeltal? One might imagine that the canonical CVC shape of roots characteristic of Mayan languages would make the job simple, but the root is normally preceded and followed by affixes which mask its identity. Pye (1983) demonstrated that, in Kiche' Mayan, prosodic salience overrides semantic salience, and children's first words in Kiche' are often composed of only the final (stressed) syllable constituted by the final consonant of the CVC root and a 'meaningless' termination suffix. Intonation thus plays a crucial role in early Kiche' morphological development. Tzeltal presents a rather different picture: The first words of children around the age of 1;6 are bare roots, children strip off all prefixes and suffixes which are obligatory in adult speech. They gradually add them, starting with the suffixes (which receive the main stress), but person prefixes are omitted in some contexts past a child's third birthday, and one obligatory aspectual prefix (x-) is systematically omitted by the four children in my longitudinal study even after they are four years old. Tzeltal children's first verbs generally show faultless isolation of the root. An account in terms of intonation or stress cannot explain this ability (the prefixes are not all syllables; the roots are not always stressed). This paper suggests that probable clues include the fact that the CVC root stays constant across contexts (with some exceptions) whereas the affixes vary, that there are some linguistic contexts where the root occurs without any prefixes (relatively frequent in the input), and that the Tzeltal discourse convention of responding by repeating with appropriate deictic alternation (e.g., "I see it." "Oh, you see it.") highlights the root.
  • Brown, P. (2012). Time and space in Tzeltal: Is the future uphill? Frontiers in Psychology, 3, 212. doi:10.3389/fpsyg.2012.00212.

    Abstract

    Linguistic expressions of time often draw on spatial language, which raises the question of whether cultural specificity in spatial language and cognition is reflected in thinking about time. In the Mayan language Tzeltal, spatial language relies heavily on an absolute frame of reference utilizing the overall slope of the land, distinguishing an “uphill/downhill” axis oriented from south to north, and an orthogonal “crossways” axis (sunrise-set) on the basis of which objects at all scales are located. Does this absolute system for calculating spatial relations carry over into construals of temporal relations? This question was explored in a study where Tzeltal consultants produced temporal expressions and performed two different non-linguistic temporal ordering tasks. The results show that at least five distinct schemata for conceptualizing time underlie Tzeltal linguistic expressions: (i) deictic ego-centered time, (ii) time as an ordered sequence (e.g., “first”/“later”), (iii) cyclic time (times of the day, seasons), (iv) time as spatial extension or location (e.g., “entering/exiting July”), and (v) a time vector extending uphillwards into the future. The non-linguistic task results showed that the “time moves uphillwards” metaphor, based on the absolute frame of reference prevalent in Tzeltal spatial language and thinking and important as well in the linguistic expressions for time, is not strongly reflected in responses on these tasks. It is argued that systematic and consistent use of spatial language in an absolute frame of reference does not necessarily transfer to consistent absolute time conceptualization in non-linguistic tasks; time appears to be more open to alternative construals.
  • Brown, P. (2012). To ‘put’ or to ‘take’? Verb semantics in Tzeltal placement and removal expressions. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 55-78). Amsterdam: Benjamins.

    Abstract

    This paper examines the verbs and other spatial vocabulary used for describing events of ‘putting’ and ‘taking’ in Tzeltal (Mayan). I discuss the semantics of different ‘put’ and ‘take’ verbs, the constructions they occur in, and the extensional patterns of verbs used in ‘put’ (Goal-oriented) vs. ‘take’ (Source-oriented) descriptions. A relatively limited role for semantically general verbs was found. Instead, Tzeltal is a ‘multiverb language’ with many different verbs usable to predicate ‘put’ and ‘take’ events, with verb choice largely determined by the shape, orientation, and resulting disposition of the Figure and Ground objects. The asymmetry that has been observed in other languages, with Goal-oriented ‘put’ verbs more finely distinguished lexically than Source-oriented ‘take’ verbs, is also apparent in Tzeltal.
  • Brucato, N., Mazières, S., Guitard, E., Giscard, P.-H., Bois, É., Larrouy, G., & Dugoujon, J.-M. (2012). The Hmong diaspora: Preserved South-East Asian genetic ancestry in French Guianese Asians. Comptes Rendus Biologies, 335, 698-707. doi:10.1016/j.crvi.2012.10.003.

    Abstract

    The Hmong Diaspora is one of the widest modern human migrations. Mainly localised in South-East Asia, the United States of America, and metropolitan France, a small community has also settled the Amazonian forest of French Guiana. We have biologically analysed 62 individuals of this unique Guianese population through three complementary genetic markers: mitochondrial DNA (HVS-I/II and coding region SNPs), Y-chromosome (SNPs and STRs), and the Gm allotypic system. All genetic systems showed a high conservation of the Asian gene pool (Asian ancestry: mtDNA = 100.0%; NRY = 99.1%; Gm = 96.6%), without a trace of founder effect. When compared across various Asian populations, the highest correlations were observed with Hmong-Mien groups still living in South-East Asia (Fst < 0.05; P-value < 0.05). Despite a long history punctuated by exodus, the French Guianese Hmong have maintained their original genetic diversity.
  • Burenhult, N. (2012). The linguistic encoding of placement and removal events in Jahai. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 21-36). Amsterdam: Benjamins.

    Abstract

    This paper explores the linguistic encoding of placement and removal events in Jahai (Austroasiatic, Malay Peninsula) on the basis of descriptions from a video elicitation task. It outlines the structural characteristics of the descriptions and isolates semantically a set of situation types that find expression in lexical opposites: (1) putting/taking, (2) inserting/extracting, (3) dressing/undressing, and (4) placing/removing one’s body parts. All involve deliberate and controlled placing/removing of a solid Figure object in relation to a Ground which is not a human recipient. However, they differ as to the identity of and physical relationship between Figure and Ground. The data also provide evidence of variation in how semantic roles are mapped onto syntactic constituents: in most situation types, Agent, Figure and Ground associate with particular constituent NPs, but some placement events are described with semantically specialised verbs encoding the Figure and even the Ground.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Buzon, V., Carbo, L. R., Estruch, S. B., Fletterick, R. J., & Estebanez-Perpina, E. (2012). A conserved surface on the ligand binding domain of nuclear receptors for allosteric control. Molecular and Cellular Endocrinology, 348(2), 394-402. doi:10.1016/j.mce.2011.08.012.

    Abstract

    Nuclear receptors (NRs) form a large superfamily of transcription factors that participate in virtually every key biological process. They control development, fertility, gametogenesis and are misregulated in many cancers. Their enormous functional plasticity as transcription factors relates in part to NR-mediated interactions with hundreds of coregulatory proteins upon ligand (e.g., hormone) binding to their ligand binding domains (LBD), or following covalent modification. Some coregulator association relates to the distinct residues that shape a coactivator binding pocket termed AF-2, a surface groove that primarily determines the preference and specificity of protein–protein interactions. However, the highly conserved AF-2 pocket in the NR superfamily appears to be insufficient to account for NR subtype specificity leading to fine transcriptional modulation in certain settings. Additional protein–protein interaction surfaces, most notably on their LBD, may contribute to modulating NR function. NR coregulators and chaperones, normally much larger than the NR itself, may also bind to such interfaces. In the case of the androgen receptor (AR) LBD surface, structural and functional data highlighted the presence of another site named BF-3, which lies at a distinct but topographically adjacent surface to AF-2. AR BF-3 is a hot spot for mutations involved in prostate cancer and androgen insensitivity syndromes, and some FDA-approved drugs bind at this site. Structural studies suggested an allosteric relationship between AF-2 and BF-3, as occupancy of the latter affected coactivator recruitment to AF-2. Physiological relevant partners of AR BF-3 have not been described as yet. The newly discovered site is highly conserved among the steroid receptors subclass, but is also present in other NRs. Several missense mutations in the BF-3 regions of these human NRs are implicated in pathology and affect their function in vitro. The fact that AR BF-3 pocket is a druggable site evidences its pharmacological potential. Compounds that may affect allosterically NR function by binding to BF-3 open promising avenues to develop type-specific NR modulators.

  • Byun, K.-S. (2007). Becoming friends with Korean Sign Language. Cheonan: Chungnam Association of the Deaf.
  • Cablitz, G., Ringersma, J., & Kemps-Snijders, M. (2007). Visualizing endangered indigenous languages of French Polynesia with LEXUS. In Proceedings of the 11th International Conference Information Visualization (IV07) (pp. 409-414). IEEE Computer Society.

    Abstract

    This paper reports on the first results of the DOBES project ‘Towards a multimedia dictionary of the Marquesan and Tuamotuan languages of French Polynesia’. Within the framework of this project we are building a digital multimedia encyclopedic lexicon of the endangered Marquesan and Tuamotuan languages using a new tool, LEXUS. LEXUS is a web-based lexicon tool, targeted at linguists involved in language documentation. LEXUS offers the possibility to visualize language. It provides functionalities to include audio, video and still images to the lexical entries of the dictionary, as well as relational linking for the creation of a semantic network knowledge base. Further activities aim at the development of (1) an improved user interface in close cooperation with the speech community and (2) a collaborative workspace functionality which will allow the speech community to actively participate in the creation of lexica.
  • Cameron-Faulkner, T., & Kidd, E. (2007). I'm are what I'm are: The acquisition of first-person singular present BE. Cognitive Linguistics, 18(1), 1-22. doi:10.1515/COG.2007.001.

    Abstract

    The present study investigates the development of am in the speech of one English-speaking child, Scarlett (aged 4;6–5;6). We show that am is infrequent in the speech addressed to children; the acquisition of this form of BE presents a unique insight into the processes underlying language development because children have little evidence regarding its correct use. Scarlett produced a pervasive error where she overextended are to first-person singular contexts where am was required (e.g., I'm are trying, When are I'm finished?). Am gradually emerged in her speech on what appears to be a construction-specific basis. The findings of the study are used in support of a usage-based, constructivist approach to language development.
  • Carota, F., Moseley, R., & Pulvermüller, F. (2012). Body-part-specific Representations of Semantic Noun Categories. Journal of Cognitive Neuroscience, 24(6), 1492-1509. doi:10.1162/jocn_a_00219.

    Abstract

    Word meaning processing in the brain involves ventrolateral temporal cortex, but a semantic contribution of the dorsal stream, especially frontocentral sensorimotor areas, has been controversial. We here examine brain activation during passive reading of object-related nouns from different semantic categories, notably animal, food, and tool words, matched for a range of psycholinguistic features. Results show ventral stream activation in temporal cortex along with category-specific activation patterns in both ventral and dorsal streams, including sensorimotor systems and adjacent pFC. Precentral activation reflected action-related semantic features of the word categories. Cortical regions implicated in mouth and face movements were sparked by food words, and hand area activation was seen for tool words, consistent with the actions implicated by the objects the words are used to speak about. Furthermore, tool words specifically activated the right cerebellum, and food words activated the left orbito-frontal and fusiform areas. We discuss our results in the context of category-specific semantic deficits in the processing of words and concepts, along with previous neuroimaging research, and conclude that specific dorsal and ventral areas in frontocentral and temporal cortex index visual and affective–emotional semantic attributes of object-related nouns and action-related affordances of their referent objects.
  • Carota, F. (2007). Collaborative use of contrastive markers: Contextual and co-textual implications. In A. Fetzer (Ed.), Context and Appropriateness: Micro meets macro (pp. 235-260). Amsterdam: Benjamins.

    Abstract

    The study presented in this paper examines the context-dependence and dialogue functions of the contrastive markers of Italian ma (but), invece (instead), mentre (while) and però (nevertheless) within task-oriented dialogues. Corpus data evidence their sensitivity to a cognitive interpersonal context, conceived as a common ground. Such a cognitive state - shared by co-participants through the coordinative process of grounding - interacts with the global dialogue structure, which is cognitively shaped by "meta-negotiating" and grounding the dialogue topic. Locally, the relation between the current dialogue structural units and the global dialogue topic is said to be specified by information structure, in particular intra-utterance themes. It is argued that contrastive markers re-orient the co-participants' cognitive states towards grounding ungrounded topical aspects to be meta-negotiated. They offer a collaborative context-updating strategy, tracking the status of common ground during dialogue topic management.
  • Carroll, M., & Flecken, M. (2012). Language production under time pressure: insights into grammaticalisation of aspect (Dutch, Italian) and language processing in bilinguals (Dutch, German). In B. Ahrenholz (Ed.), Einblicke in die Zweitspracherwerbsforschung und Ihre methodischen Verfahren (pp. 49-76). Berlin: De Gruyter.
  • Carroll, M., Lambert, M., Weimar, K., Flecken, M., & von Stutterheim, C. (2012). Tracing trajectories: Motion event construal by advanced L2 French-English and L2 French-German speakers. Language Interaction and Acquisition, 3(2), 202-230. doi:10.1075/lia.3.2.03car.

    Abstract

    Although the typological contrast between Romance and Germanic languages as verb-framed versus satellite-framed (Talmy 1985) forms the background for many empirical studies on L2 acquisition, the inconclusive picture to date calls for more differentiated, fine-grained analyses. The present study goes beyond explanations based on this typological contrast and takes into account the sources from which spatial concepts are mainly derived in order to shape the trajectory traced by the entity in motion when moving through space: the entity in V-languages versus features of the ground in S-languages. It investigates why advanced French learners of English and German have difficulty acquiring the use of spatial concepts typical of the L2s to shape the trajectory, although relevant concepts can be expressed in their L1. The analysis compares motion event descriptions, based on the same sets of video clips, of L1 speakers of the three languages to L1 French-L2 English and L1 French-L2 German speakers, showing that the learners do not fully acquire the use of L2-specific spatial concepts. We argue that encoded concepts derived from the entity in motion vs. the ground lead to a focus on different aspects of motion events, in accordance with their compatibility with these sources, and are difficult to restructure in L2 acquisition.
  • Casasanto, D., & Henetz, T. (2012). Handedness shapes children’s abstract concepts. Cognitive Science, 36, 359-372. doi:10.1111/j.1551-6709.2011.01199.x.

    Abstract

    Can children’s handedness influence how they represent abstract concepts like kindness and intelligence? Here we show that from an early age, right-handers associate rightward space more strongly with positive ideas and leftward space with negative ideas, but the opposite is true for left-handers. In one experiment, children indicated where on a diagram a preferred toy and a dispreferred toy should go. Right-handers tended to assign the preferred toy to a box on the right and the dispreferred toy to a box on the left. Left-handers showed the opposite pattern. In a second experiment, children judged which of two cartoon animals looked smarter (or dumber) or nicer (or meaner). Right-handers attributed more positive qualities to animals on the right, but left-handers to animals on the left. These contrasting associations between space and valence cannot be explained by exposure to language or cultural conventions, which consistently link right with good. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they can act more fluently with their dominant hands. Results support the body-specificity hypothesis (Casasanto, 2009), showing that children with different kinds of bodies think differently in corresponding ways.
  • Casasanto, D. (2012). Whorfian hypothesis. In J. L. Jackson, Jr. (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press. doi:10.1093/OBO/9780199766567-0058.

    Abstract

    Introduction
    The Sapir-Whorf hypothesis (a.k.a. the Whorfian hypothesis) concerns the relationship between language and thought. Neither the anthropological linguist Edward Sapir (b. 1884–d. 1939) nor his student Benjamin Whorf (b. 1897–d. 1941) ever formally stated any single hypothesis about the influence of language on nonlinguistic cognition and perception. On the basis of their writings, however, two proposals emerged, generating decades of controversy among anthropologists, linguists, philosophers, and psychologists. According to the more radical proposal, linguistic determinism, the languages that people speak rigidly determine the way they perceive and understand the world. On the more moderate proposal, linguistic relativity, habits of using language influence habits of thinking. As a result, people who speak different languages think differently in predictable ways. During the latter half of the 20th century, the Sapir-Whorf hypothesis was widely regarded as false. Around the turn of the 21st century, however, experimental evidence reopened debate about the extent to which language shapes nonlinguistic cognition and perception. Scientific tests of linguistic determinism and linguistic relativity help to clarify what is universal in the human mind and what depends on the particulars of people’s physical and social experience.
    General Overviews and Foundational Texts

    Writing on the relationship between language and thought predates Sapir and Whorf, and extends beyond the academy. The 19th-century German philosopher Wilhelm von Humboldt argued that language constrains people’s worldview, foreshadowing the idea of linguistic determinism later articulated in Sapir 1929 and Whorf 1956 (Humboldt 1988). The intuition that language radically determines thought has been explored in works of fiction such as Orwell’s dystopian fantasy 1984 (Orwell 1949). Although there is little empirical support for radical linguistic determinism, more moderate forms of linguistic relativity continue to generate influential research, reviewed from an anthropologist’s perspective in Lucy 1997, from a psychologist’s perspective in Hunt and Agnoli 1991, and discussed from multidisciplinary perspectives in Gumperz and Levinson 1996 and Gentner and Goldin-Meadow 2003.
  • Casillas, M., & Frank, M. C. (2012). Cues to turn boundary prediction in adults and preschoolers. In S. Brown-Schmidt, J. Ginzburg, & S. Larsson (Eds.), Proceedings of SemDial 2012 (SeineDial): The 16th Workshop on the Semantics and Pragmatics of Dialogue (pp. 61-69). Paris: Université Paris-Diderot.

    Abstract

    Conversational turns often proceed with very brief pauses between speakers. In order to maintain “no gap, no overlap” turntaking, we must be able to anticipate when an ongoing utterance will end, tracking the current speaker for upcoming points of potential floor exchange. The precise set of cues that listeners use for turn-end boundary anticipation is not yet established. We used an eyetracking paradigm to measure adults’ and children’s online turn processing as they watched videos of conversations in their native language (English) and a range of other languages they did not speak. Both adults and children anticipated speaker transitions effectively. In addition, we observed evidence of turn-boundary anticipation for questions even in languages that were unknown to participants, suggesting that listeners’ success in turn-end anticipation does not rely solely on lexical information.
  • Catani, M., Dell'Acqua, F., Bizzi, A., Forkel, S. J., Williams, S. C., Simmons, A., Murphy, D. G., & Thiebaut de Schotten, M. (2012). Beyond cortical localization in clinico-anatomical correlation. Cortex, 48(10), 1262-1287. doi:10.1016/j.cortex.2012.07.001.

    Abstract

    Last year was the 150th anniversary of Paul Broca's landmark case report on speech disorder that paved the way for subsequent studies of cortical localization of higher cognitive functions. However, many complex functions rely on the activity of distributed networks rather than single cortical areas. Hence, it is important to understand how brain regions are linked within large-scale networks and to map lesions onto connecting white matter tracts. To facilitate this network approach we provide a synopsis of classical neurological syndromes associated with frontal, parietal, occipital, temporal and limbic lesions. A review of tractography studies in a variety of neuropsychiatric disorders is also included. The synopsis is accompanied by a new atlas of the human white matter connections based on diffusion tensor tractography freely downloadable on http://www.natbrainlab.com. Clinicians can use the maps to accurately identify the tract affected by lesions visible on conventional CT or MRI. The atlas will also assist researchers to interpret their group analysis results. We hope that the synopsis and the atlas by allowing a precise localization of white matter lesions and associated symptoms will facilitate future work on the functional correlates of human neural networks as derived from the study of clinical populations. Our goal is to stimulate clinicians to develop a critical approach to clinico-anatomical correlative studies and broaden their view of clinical anatomy beyond the cortical surface in order to encompass the dysfunction related to connecting pathways.

  • Chang, F., Janciauskas, M., & Fitz, H. (2012). Language adaptation and learning: Getting explicit about implicit learning. Language and Linguistics Compass, 6, 259-278. doi:10.1002/lnc3.337.

    Abstract

    Linguistic adaptation is a phenomenon where language representations change in response to linguistic input. Adaptation can occur on multiple linguistic levels such as phonology (tuning of phonotactic constraints), words (repetition priming), and syntax (structural priming). The persistent nature of these adaptations suggests that they may be a form of implicit learning and connectionist models have been developed which instantiate this hypothesis. Research on implicit learning, however, has also produced evidence that explicit chunk knowledge is involved in the performance of these tasks. In this review, we examine how these interacting implicit and explicit processes may change our understanding of language learning and processing.
  • Chen, A., Den Os, E., & De Ruiter, J. P. (2007). Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech. The Linguistic Review, 24(2), 317-344. doi:10.1515/TLR.2007.012.

    Abstract

    Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone-synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition.
  • Chen, H.-C., & Cutler, A. (1997). Auditory priming in spoken and printed word recognition. In H.-C. Chen (Ed.), Cognitive processing of Chinese and related Asian languages (pp. 77-81). Hong Kong: Chinese University Press.
  • Chen, X. S., Rozhdestvensky, T. S., Collins, L. J., Schmitz, J., & Penny, D. (2007). Combined experimental and computational approach to identify non-protein-coding RNAs in the deep-branching eukaryote Giardia intestinalis. Nucleic Acids Research, 35, 4619-4628. doi:10.1093/nar/gkm474.

    Abstract

    Non-protein-coding RNAs represent a large proportion of transcribed sequences in eukaryotes. These RNAs often function in large RNA–protein complexes, which are catalysts in various RNA-processing pathways. As RNA processing has become an increasingly important area of research, numerous non-messenger RNAs have been uncovered in all the model eukaryotic organisms. However, knowledge on RNA processing in deep-branching eukaryotes is still limited. This study focuses on the identification of non-protein-coding RNAs from the diplomonad parasite Giardia intestinalis, showing that a combined experimental and computational search strategy is a fast method of screening reduced or compact genomes. The analysis of our Giardia cDNA library has uncovered 31 novel candidates, including C/D-box and H/ACA box snoRNAs, as well as an unusual transcript of RNase P, and double-stranded RNAs. Subsequent computational analysis has revealed additional putative C/D-box snoRNAs. Our results will lead towards a future understanding of RNA metabolism in the deep-branching eukaryote Giardia, as more ncRNAs are characterized.
  • Chen, X. S., & Brown, C. M. (2012). Computational identification of new structured cis-regulatory elements in the 3'-untranslated region of human protein coding genes. Nucleic Acids Research, 40, 8862-8873. doi:10.1093/nar/gks684.

    Abstract

    Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. An approach to this is to use computational methods that have the advantage of analysing genomic data, particularly comparative data on a large scale. In this study, a set of structural discovery algorithms was applied followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3′-untranslated regions (UTRs) and successfully distinguished these, and groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3′-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
  • Chen, J. (2012). “She from bookshelf take-descend-come the box”: Encoding and categorizing placement events in Mandarin. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 37-54). Amsterdam: Benjamins.

    Abstract

    This paper investigates the lexical semantics of placement verbs in Mandarin. The majority of Mandarin placement verbs are directional verb compounds (e.g., na2-xia4-lai2 ‘take-descend-come’). They are composed of two or three verbs in a fixed order, each encoding certain semantic components of placement events. The first verb usually conveys object manipulation and the second and the third verbs indicate the Path of motion, including Deixis. The first verb, typically encoding object manipulation, can be semantically general or specific: two general verbs, fang4 ‘put’ and na2 ‘take’, have large but constrained extensional categories, and a number of specific verbs are used based on the Manner of manipulation of the Figure object, the relationship between and the physical properties of Figure and Ground, intentionality of the Agent, and the type of instrument.
  • Chen, J. (2007). 'He cut-break the rope': Encoding and categorizing cutting and breaking events in Mandarin. Cognitive Linguistics, 18(2), 273-285. doi:10.1515/COG.2007.015.

    Abstract

    Mandarin categorizes cutting and breaking events on the basis of fine semantic distinctions in the causal action and the caused result. I demonstrate the semantics of Mandarin C&B verbs from the perspective of event encoding and categorization as well as argument structure alternations. Three semantically different types of predicates can be identified: verbs denoting the C&B action subevent, verbs encoding the C&B result subevent, and resultative verb compounds (RVC) that encode both the action and the result subevents. The first verb of an RVC is basically dyadic, whereas the second is monadic. RVCs as a whole are also basically dyadic, and do not undergo detransitivization.
  • Chen, A., & Fikkert, P. (2007). Intonation of early two-word utterances in Dutch. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 315-320). Dudweiler: Pirrot.

    Abstract

    We analysed intonation contours of two-word utterances from three monolingual Dutch children aged between 1;4 and 2;1 in the autosegmental-metrical framework. Our data show that children have mastered the inventory of the boundary tones and nuclear pitch accent types (except for L*HL and L*!HL) at the 160-word level, and the set of nondownstepped pre-nuclear pitch accents (except for L*) at the 230-word level, contra previous claims on the mastery of adult-like intonation contours before or at the onset of first words. Further, there is evidence that intonational development is correlated with an increase in vocabulary size. Moreover, we found that children show a preference for falling contours, as predicted on the basis of universal production mechanisms. In addition, the utterances are mostly spoken with both words accented independent of semantic relations expressed and information status of each word across developmental stages, contra prior work. Our study suggests a number of topics for further research.
  • Chen, A. (2007). Intonational realisation of topic and focus by Dutch-acquiring 4- to 5-year-olds. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1553-1556). Dudweiler: Pirrot.

    Abstract

    This study examined how Dutch-acquiring 4- to 5-year-olds use different pitch accent types and deaccentuation to mark topic and focus at the sentence level and how they differ from adults. The topic and focus were non-contrastive and realised as full noun phrases. It was found that children realise topic and focus similarly frequently with H*L, whereas adults use H*L noticeably more frequently in focus than in topic in sentence-initial position and nearly only in focus in sentence-final position. Further, children frequently realise the topic with an accent, whereas adults mostly deaccent the sentence-final topic and use H*L and H* to realise the sentence-initial topic because of rhythmic motivation. These results show that 4- and 5-year-olds have not acquired H*L as the typical focus accent and deaccentuation as the typical topic intonation yet. Possibly, frequent use of H*L in sentence-initial topic in adult Dutch has made it difficult to extract the functions of H*L and deaccentuation from the input.
  • Chen, A. (2007). Language-specificity in the perception of continuation intonation. In C. Gussenhoven, & T. Riad (Eds.), Tones and tunes II: Phonetic and behavioural studies in word and sentence prosody (pp. 107-142). Berlin: Mouton de Gruyter.

    Abstract

    This paper addressed the question of how British English, German and Dutch listeners differ in their perception of continuation intonation both at the phonological level (Experiment 1) and at the level of phonetic implementation (Experiment 2). In Experiment 1, preference scores of pitch contours to signal continuation at the clause-boundary were obtained from these listener groups. It was found that among contours with H%, British English listeners had a strong preference for H*L H%, as predicted. Unexpectedly, British English listeners rated H* H% noticeably more favourably than L*H H%; Dutch listeners largely rated H* H% more favourably than H*L H% and L*H H%; German listeners rated these contours similarly and seemed to have a slight preference for H*L H%. In Experiment 2, the degree to which a final rise was perceived to express continuation was established for each listener group in a made-up language. It was found that although all listener groups associated a higher end pitch with a higher degree of continuation likelihood, the perceived meaning difference for a given interval of end pitch heights varied with the contour shape of the utterance final syllable. When it was comparable to H* H%, British English and Dutch listeners perceived a larger meaning difference than German listeners; when it was comparable to H*L H%, British English listeners perceived a larger difference than German and Dutch listeners. This shows that language-specificity in continuation intonation at the phonological level affects the perception of continuation intonation at the phonetic level.
  • Chen, A. (2012). Shaping the intonation of Wh-questions: Information structure and beyond. In J. P. de Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 146-164). New York: Cambridge University Press.
  • Chen, A. (2012). The prosodic investigation of information structure. In M. Krifka, & R. Musan (Eds.), The expression of information structure (pp. 249-286). Berlin: de Gruyter.
  • Cho, T., McQueen, J. M., & Cox, E. A. (2007). Prosodically driven phonetic detail in speech processing: The case of domain-initial strengthening in English. Journal of Phonetics, 35(2), 210-243. doi:10.1016/j.wocn.2006.03.003.

    Abstract

    We explore the role of the acoustic consequences of domain-initial strengthening in spoken-word recognition. In two cross-modal identity-priming experiments, listeners heard sentences and made lexical decisions to visual targets, presented at the onset of the second word in two-word sequences containing lexical ambiguities (e.g., bus tickets, with the competitor bust). These sequences contained Intonational Phrase (IP) or Prosodic Word (Wd) boundaries, and the second word's initial Consonant and Vowel (CV, e.g., [tI]) was spliced from another token of the sequence in IP- or Wd-initial position. Acoustic analyses showed that IP-initial consonants were articulated more strongly than Wd-initial consonants. In Experiment 1, related targets were post-boundary words (e.g., tickets). No strengthening effect was observed (i.e., identity priming effects did not vary across splicing conditions). In Experiment 2, related targets were pre-boundary words (e.g., bus). There was a strengthening effect (stronger priming when the post-boundary CVs were spliced from IP-initial than from Wd-initial position), but only in Wd-boundary contexts. These were the conditions where phonetic detail associated with domain-initial strengthening could assist listeners most in lexical disambiguation. We discuss how speakers may strengthen domain-initial segments during production and how listeners may use the resulting acoustic correlates of prosodic strengthening during word recognition.
  • Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). The neural correlates of verbal feedback processing: An fMRI study employing overt speech. Human Brain Mapping, 28(9), 868-879. doi:10.1002/hbm.20315.

    Abstract

    Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink noise was presented to mask external feedback, (3) covert picture-naming, (4) listening to the picture names (previously recorded from participants' own voices), and (5) listening to pink noise. The results show that auditory feedback processing involves a network of different areas related to general performance monitoring and speech-motor control. These include the cingulate cortex and the bilateral insula, supplementary motor area, bilateral motor areas, cerebellum, thalamus and basal ganglia. Our findings suggest that the anterior cingulate cortex, which is often implicated in error-processing and conflict-monitoring, is also engaged in ongoing speech monitoring. Furthermore, in the superior temporal gyrus, we found a reduced response to speaking under normal feedback conditions. This finding is interpreted in the framework of a forward model according to which, during speech production, the sensory consequence of the speech-motor act is predicted to attenuate the sensitivity of the auditory cortex.
  • Christoffels, I. K., Firk, C., & Schiller, N. O. (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192-208. doi:10.1016/j.brainres.2007.01.137.

    Abstract

    This study addressed how bilingual speakers switch between their first and second language when speaking. Event-related brain potentials (ERPs) and naming latencies were measured while unbalanced German (L1)-Dutch (L2) speakers performed a picture-naming task. Participants named pictures either in their L1 or in their L2 (blocked language conditions), or participants switched between their first and second language unpredictably (mixed language condition). Furthermore, form similarity between translation equivalents (cognate status) was manipulated. A cognate facilitation effect was found for L1 and L2 indicating phonological activation of the non-response language in blocked and mixed language conditions. The ERP data also revealed small but reliable effects of cognate status. Language switching resulted in equal switching costs for both languages and was associated with a modulation in the ERP waveforms (time windows 275-375 ms and 375-475 ms). Mixed language context affected especially the L1, both in ERPs and in latencies, which became slower in L1 than L2. It is suggested that sustained and transient components of language control should be distinguished. Results are discussed in relation to current theories of bilingual language processing.
  • Chu, M., & Kita, S. (2012). The role of spontaneous gestures in spatial problem solving. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 57-68). Heidelberg: Springer.

    Abstract

    When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on: 1) how spontaneous gesture can reveal the development of problem solving strategies when people solve spatial problems; 2) whether producing gestures can enhance spatial problem solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children’s cognitive development processes. Furthermore, gesture enhances spatial problem solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
  • Chu, M., & Kita, S. (2012). The nature of the beneficial role of spontaneous gesture in spatial problem solving [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Oral Presentations, 13(Suppl. 1), S39.

    Abstract

    Spontaneous gestures play an important role in spatial problem solving. We investigated the functional role and underlying mechanism of spontaneous gestures in spatial problem solving. In Experiment 1, 132 participants were required to solve a mental rotation task (see Figure 1) without speaking. Participants gestured more frequently in difficult trials than in easy trials. In Experiment 2, 66 new participants were given two identical sets of mental rotation problems, like the one used in Experiment 1. Participants who were encouraged to gesture in the first set solved more problems correctly than those who were merely allowed to gesture or those who were prohibited from gesturing, both in the first set and in the second set, in which all participants were prohibited from gesturing. The gestures produced by the gesture-encouraged group and the gesture-allowed group were not qualitatively different. In Experiment 3, 32 new participants were first given a set of mental rotation problems and then a second set of non-gesturing paper-folding problems. The gesture-encouraged group solved more problems correctly in both the first set of mental rotation problems and the second set of non-gesturing paper-folding problems. We concluded that gesture improves spatial problem solving. Furthermore, gesture has a lasting beneficial effect even when gesture is not available, and the beneficial effect is problem-general. We suggested that gesture enhances spatial problem solving by providing a rich sensori-motor representation of the physical world and picking up information that is less readily available to visuo-spatial processes.
  • Clark, E. V., & Bowerman, M. (1986). On the acquisition of final voiced stops. In J. A. Fishman (Ed.), The Fergusonian impact: in honor of Charles A. Ferguson on the occasion of his 65th birthday. Volume 1: From phonology to society (pp. 51-68). Berlin: Mouton de Gruyter.
  • Cohen, E. (2012). [Review of the book Searching for Africa in Brazil: Power and Tradition in Candomblé by Stefania Capone]. Critique of Anthropology, 32, 217-218. doi:10.1177/0308275X12439961.
  • Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.

    Abstract

    Recent game-theoretic simulation and analytical models have demonstrated that cooperative strategies mediated by indicators of cooperative potential, or “tags,” can invade, spread, and resist invasion by noncooperators across a range of population-structure and cost-benefit scenarios. The plausibility of these models is potentially relevant for human evolutionary accounts insofar as humans possess some phenotypic trait that could serve as a reliable tag. Linguistic markers, such as accent and dialect, have frequently been either cursorily defended or promptly dismissed as satisfying the criteria of a reliable and evolutionarily viable tag. This paper integrates evidence from a range of disciplines to develop and assess the claim that speech accent mediated the evolution of tag-based cooperation in humans. Existing evidence warrants the preliminary conclusion that accent markers meet the demands of an evolutionarily viable tag and potentially afforded a cost-effective solution to the challenges of maintaining viable cooperative relationships in diffuse, regional social networks.
  • Collins, J. (2012). The evolution of the Greenbergian word order correlations. In T. C. Scott-Phillips, M. Tamariz, E. A. Cartmill, & J. R. Hurford (Eds.), The evolution of language. Proceedings of the 9th International Conference (EVOLANG9) (pp. 72-79). Singapore: World Scientific.
  • Colzato, L. S., Zech, H., Hommel, B., Verdonschot, R. G., Van den Wildenberg, W. P. M., & Hsieh, S. (2012). Loving-kindness brings loving-kindness: The impact of Buddhism on cognitive self-other integration. Psychonomic Bulletin & Review, 19(3), 541-545. doi:10.3758/s13423-012-0241-y.

    Abstract

    Common wisdom has it that Buddhism enhances compassion and self-other integration. We put this assumption to empirical test by comparing practicing Taiwanese Buddhists with well-matched atheists. Buddhists showed more evidence of self-other integration in the social Simon task, which assesses the degree to which people co-represent the actions of a coactor. This suggests that self-other integration and task co-representation vary as a function of religious practice.
  • Connell, L., Cai, Z. G., & Holler, J. (2012). Do you see what I'm singing? Visuospatial movement biases pitch perception. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 252-257). Austin, TX: Cognitive Science Society.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York City, NY, USA: Oxford University Press, Inc.
  • Crasborn, O., & Windhouwer, M. (2012). ISOcat data categories for signed language resources. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 118-128). Heidelberg: Springer.

    Abstract

    As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain ‘sign language’ that was introduced in 2011.
  • Cristia, A., & Peperkamp, S. (2012). Generalizing without encoding specifics: Infants infer phonotactic patterns on sound classes. In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 126-138). Somerville, Mass.: Cascadilla Press.

  • Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A., & Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Frontiers in Psychology, 3, 479. doi:10.3389/fpsyg.2012.00479.

    Abstract

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps left to face in future work.
  • Cronin, K. A. (2012). Cognitive aspects of prosocial behavior in nonhuman primates. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Part 3 (2nd ed., pp. 581-583). Berlin: Springer.

    Abstract

    Definition: Prosocial behavior is any behavior performed by one individual that results in a benefit for another individual. Prosocial motivations, prosocial preferences, or other-regarding preferences refer to the psychological predisposition to behave in the best interest of another individual. A behavior need not be costly to the actor to be considered prosocial; thus the concept is distinct from altruistic behavior, which requires that the actor incurs some cost when providing a benefit to another.
  • Cronin, K. A. (2012). Prosocial behaviour in animals: The influence of social relationships, communication and rewards. Animal Behaviour, 84, 1085-1093. doi:10.1016/j.anbehav.2012.08.009.

    Abstract

    Researchers have struggled to obtain a clear account of the evolution of prosocial behaviour despite a great deal of recent effort. The aim of this review is to take a brief step back from addressing the question of evolutionary origins of prosocial behaviour in order to identify contextual factors that are contributing to variation in the expression of prosocial behaviour and hindering progress towards identifying phylogenetic patterns. Most available data come from the Primate Order, and the choice of contextual factors to consider was informed by theory and practice, including the nature of the relationship between the potential donor and recipient, the communicative behaviour of the recipients, and features of the prosocial task including whether rewards are visible and whether the prosocial choice creates an inequity between actors. Conclusions are drawn about the facilitating or inhibiting impact of each of these factors on the expression of prosocial behaviour, and areas for future research are highlighted. Acknowledging the impact of these contextual features on the expression of prosocial behaviours should stimulate new research into the proximate mechanisms that drive these effects, yield experimental designs that better control for potential influences on prosocial expression, and ultimately allow progress towards reconstructing the evolutionary origins of prosocial behaviour.
  • Cronin, K. A., & Sanchez, A. (2012). Social dynamics and cooperation: The case of nonhuman primates and its implications for human behavior. Advances in complex systems, 15, 1250066. doi:10.1142/S021952591250066X.

    Abstract

    The social factors that influence cooperation have remained largely uninvestigated but have the potential to explain much of the variation in cooperative behavior observed in the natural world. We show here that certain dimensions of the social environment, namely the size of the social group, the degree of social tolerance expressed, the structure of the dominance hierarchy, and the patterns of dispersal, may influence the emergence and stability of cooperation in predictable ways. Furthermore, the social environment experienced by a species over evolutionary time will have shaped their cognition to provide certain strengths and strategies that are beneficial in their species’ social world. These cognitive adaptations will in turn impact the likelihood of cooperating in a given social environment. Experiments with one primate species, the cottontop tamarin, illustrate how social dynamics may influence emergence and stability of cooperative behavior in this species. We then take a more general viewpoint and argue that the hypotheses presented here require further experimental work and the addition of quantitative modeling to obtain a better understanding of how social dynamics influence the emergence and stability of cooperative behavior in complex systems. We conclude by pointing out subsequent specific directions for models and experiments that will allow relevant advances in the understanding of the emergence of cooperation.
  • Cutfield, S. (2012). Demonstratives in Dalabon: A language of southwestern Arnhem Land. PhD Thesis, Monash University, Melbourne.

    Abstract

    This study is a comprehensive description of the nominal demonstratives in Dalabon, a severely endangered Gunwinyguan non-Pama-Nyungan language of southwestern Arnhem Land, northern Australia. Demonstratives are attested in the basic vocabulary of every language, yet remain heretofore underdescribed in Australian languages. Traditional definitions of demonstratives as primarily making spatial reference have recently evolved at a great pace, with close analyses of demonstratives-in-use revealing that their use in spatial reference, in narrative discourse, and in interaction is significantly more complex than previously assumed, and that definitions of demonstrative forms are best developed after consideration of their use across these contexts. The present study reinforces findings of complexity in demonstrative use, and the significance of a multidimensional characterization of demonstrative forms. This study is therefore a contribution to the description of Dalabon, to the analysis of demonstratives in Australian languages, and to the theory and typology of demonstratives cross-linguistically. In this study, I present a multi-dimensional analysis of Dalabon demonstratives, using a variety of theoretical frameworks and research tools including descriptive linguistics, lexical-functional grammar, discourse analysis, gesture studies and pragmatics. Using data from personal narratives, improvised interactions and elicitation sessions to investigate the demonstratives, this study takes into account their morphosyntactic distribution, uses in the speech situation, interactional factors, discourse phenomena, concurrent gesture, and uses in personal narratives. I conclude with a unified account of the intensional and extensional semantics of each form surveyed. The Dalabon demonstrative paradigm divides into two types, those which are spatially-specific and those which are non-spatial. The spatially-specific demonstratives nunda ‘this (in the here-space)’ and djakih ‘that (in the there-space)’ are shown not to encode the location of the referent per se, rather its relative position to dynamic physical and social elements of the speech situation such as the speaker’s engagement area and here-space. Both forms are also used as spatial adverbs to mean ‘here’ and ‘there’ respectively, while only nunda is also used as a temporal adverb ‘now, today’. The spatially-specific demonstratives are limited to situational use in narratives. The non-spatial demonstratives kanh/kanunh ‘that (identifiable)’ and nunh ‘that (unfamiliar, contrastive)’ are used in both the speech situation and personal narratives to index referents as ‘identifiable’ or ‘unfamiliar’ respectively. Their use in the speech situation can conversationally implicate that the referent is distal. The non-spatial demonstratives display the greatest diversity of use in narratives, each specializing for certain uses, yet their wide distribution across discourse usage types can be described on account of their intensional semantics.
    The findings of greatest typological interest in this study are that speakers’ choice of demonstrative in the speech situation is influenced by multiple simultaneous deictic parameters (including gesture); that oppositions in the Dalabon demonstrative paradigm are not equal, nor exclusively semantic; that the form nunh ‘that (unfamiliar, contrastive)’ is used to index a referent as somewhat inaccessible or unexpected; that the ‘recognitional’ form kanh/kanunh is instead described as ‘identifiable’; and that speakers use demonstratives to index emotional deixis to a referent, or to their addressee.
  • Cutfield, S. (2012). Foreword. Australian Journal of Linguistics, 32(4), 457-458.
  • Cutfield, S. (2012). Principles of Dalabon plant and animal names and classification. In D. Bordulk, N. Dalak, M. Tukumba, L. Bennett, R. Bordro Tingey, M. Katherine, S. Cutfield, M. Pamkal, & G. Wightman (Eds.), Dalabon plants and animals: Aboriginal biocultural knowledge from Southern Arnhem Land, North Australia (pp. 11-12). Palmerston, NT, Australia: Department of Land and Resource Management, Northern Territory.
  • Cutler, A. (1989). Auditory lexical access: Where do we start? In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 342-356). Cambridge, MA: MIT Press.

    Abstract

    The lexicon, considered as a component of the process of recognizing speech, is a device that accepts a sound image as input and outputs meaning. Lexical access is the process of formulating an appropriate input and mapping it onto an entry in the lexicon's store of sound images matched with their meanings. This chapter addresses the problems of auditory lexical access from continuous speech. The central argument to be proposed is that utterance prosody plays a crucial role in the access process. Continuous listening faces problems that are not present in visual recognition (reading) or in noncontinuous recognition (understanding isolated words). Aspects of utterance prosody offer a solution to these particular problems.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of the Phonetic Society of Japan, 1, 4-13.
  • Cutler, A. (2012). Eentaalpsychologie is geen taalpsychologie: Part II. [Valedictory lecture Radboud University]. Nijmegen: Radboud University.

    Abstract

    Address delivered at her farewell as Professor of Comparative Psycholinguistics at the Faculty of Social Sciences, Radboud University Nijmegen, on Thursday 20 September 2012
  • Cutler, A. (1980). Errors of stress and intonation. In V. A. Fromkin (Ed.), Errors in linguistic performance: Slips of the tongue, ear, pen and hand (pp. 67-80). New York: Academic Press.
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Cutler, A., Wales, R., Cooper, N., & Janssen, J. (2007). Dutch listeners' use of suprasegmental cues to English stress. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetics Sciences (ICPhS 2007) (pp. 1913-1916). Dudweiler: Pirrot.

    Abstract

    Dutch listeners outperform native listeners in identifying syllable stress in English. This is because lexical stress is more useful in recognition of spoken words of Dutch than of English, so that Dutch listeners pay greater attention to stress in general. We examined Dutch listeners’ use of the acoustic correlates of English stress. Primary- and secondary-stressed syllables differ significantly on acoustic measures, and some differences, in F0 especially, correlate with data of earlier listening experiments. The correlations found in the Dutch responses were not paralleled in data from native listeners. Thus the acoustic cues which distinguish English primary versus secondary stress are better exploited by Dutch than by native listeners.
  • Cutler, A., & Weber, A. (2007). Listening experience and phonetic-to-lexical mapping in L2. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 43-48). Dudweiler: Pirrot.

    Abstract

    In contrast to initial L1 vocabularies, which of necessity depend largely on heard exemplars, L2 vocabulary construction can draw on a variety of knowledge sources. This can lead to richer stored knowledge about the phonology of the L2 than the listener's prelexical phonetic processing capacity can support, and thus to mismatch between the level of detail required for accurate lexical mapping and the level of detail delivered by the prelexical processor. Experiments on spoken word recognition in L2 have shown that phonetic contrasts which are not reliably perceived are represented in the lexicon nonetheless. This lexical representation of contrast must be based on abstract knowledge, not on veridical representation of heard exemplars. New experiments confirm that provision of abstract knowledge (in the form of spelling) can induce lexical representation of a contrast which is not reliably perceived; but also that experience (in the form of frequency of occurrence) modulates the mismatch of phonetic and lexical processing. We conclude that a correct account of word recognition in L2 (as indeed in L1) requires consideration of both abstract and episodic information.
  • Cutler, A., Cooke, M., Garcia-Lecumberri, M. L., & Pasveer, D. (2007). L2 consonant identification in noise: Cross-language comparisons. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1585-1588). Adelaide: Causal productions.

    Abstract

    The difficulty of listening to speech in noise is exacerbated when the speech is in the listener’s L2 rather than L1. In this study, Spanish and Dutch users of English as an L2 identified American English consonants in a constant intervocalic context. Their performance was compared with that of L1 (British English) listeners, under quiet conditions and when the speech was masked by speech from another talker or by noise. Masking affected performance more for the Spanish listeners than for the L1 listeners, but not for the Dutch listeners, whose performance was worse than the L1 case to about the same degree in all conditions. There were, however, large differences in the pattern of results across individual consonants, which were consistent with differences in how consonants are identified in the respective L1s.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (1980). La leçon des lapsus. La Recherche, 11(112), 686-692.
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    SPEECH, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1, 2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A. (2012). Native listening: Language experience and the recognition of spoken words. Cambridge, MA: MIT Press.

    Abstract

    Understanding speech in our native tongue seems natural and effortless; listening to speech in a nonnative language is a different experience. In this book, Anne Cutler argues that listening to speech is a process of native listening because so much of it is exquisitely tailored to the requirements of the native language. Her cross-linguistic study (drawing on experimental work in languages that range from English and Dutch to Chinese and Japanese) documents what is universal and what is language specific in the way we listen to spoken language. Cutler describes the formidable range of mental tasks we carry out, all at once, with astonishing speed and accuracy, when we listen. These include evaluating probabilities arising from the structure of the native vocabulary, tracking information to locate the boundaries between words, paying attention to the way the words are pronounced, and assessing not only the sounds of speech but prosodic information that spans sequences of sounds. She describes infant speech perception, the consequences of language-specific specialization for listening to other languages, the flexibility and adaptability of listening (to our native languages), and how language-specificity and universality fit together in our language processing system. Drawing on her four decades of work as a psycholinguist, Cutler documents the recent growth in our knowledge about how spoken-word recognition works and the role of language structure in this process. Her book is a significant contribution to a vibrant and rapidly developing field.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach, & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A. (1980). Productivity in word formation. In J. Kreiman, & A. E. Ojeda (Eds.), Papers from the Sixteenth Regional Meeting, Chicago Linguistic Society (pp. 45-51). Chicago, Ill.: CLS.
  • Cutler, A. (1982). Prosody and sentence perception in English. In J. Mehler, E. C. Walker, & M. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 201-216). Hillsdale, N.J: Erlbaum.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (1997). Prosody and the structure of the message. In Y. Sagisaka, N. Campbell, & N. Higuchi (Eds.), Computing prosody: Computational models for processing spontaneous speech (pp. 63-66). Heidelberg: Springer.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
