Publications

  • Acheson, D. J., & MacDonald, M. C. (2009). Twisting tongues and memories: Explorations of the relationship between language production and verbal working memory. Journal of Memory and Language, 60(3), 329-350. doi:10.1016/j.jml.2008.12.002.

    Abstract

    Many accounts of working memory posit specialized storage mechanisms for the maintenance of serial order. We explore an alternative, that maintenance is achieved through temporary activation in the language production architecture. Four experiments examined the extent to which the phonological similarity effect can be explained as a sublexical speech error. Phonologically similar nonword stimuli were ordered to create tongue twister or control materials used in four tasks: reading aloud, immediate spoken recall, immediate typed recall, and serial recognition. Dependent measures from working memory (recall accuracy) and language production (speech errors) fields were used. Even though lists were identical except for item order, robust effects of tongue twisters were observed. Speech error analyses showed that errors were better described as phoneme rather than item ordering errors. The distribution of speech errors was comparable across all experiments and exhibited syllable-position effects, suggesting an important role for production processes. Implications for working memory and language production are discussed.
  • Acheson, D. J., & MacDonald, M. C. (2009). Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychological Bulletin, 135(1), 50-68. doi:10.1037/a0014411.

    Abstract

    Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information.
  • Adank, P., & Janse, E. (2009). Perceptual learning of time-compressed and natural fast speech. Journal of the Acoustical Society of America, 126(5), 2649-2659. doi:10.1121/1.3216914.

    Abstract

    Speakers vary their speech rate considerably during a conversation, and listeners are able to quickly adapt to these variations in speech rate. Adaptation to fast speech rates is usually measured using artificially time-compressed speech. This study examined adaptation to two types of fast speech: artificially time-compressed speech and natural fast speech. Listeners performed a speeded sentence verification task on three series of sentences: normal-speed sentences, time-compressed sentences, and natural fast sentences. Listeners were divided into two groups to evaluate the possibility of transfer of learning between the time-compressed and natural fast conditions. The first group verified the natural fast before the time-compressed sentences, while the second verified the time-compressed before the natural fast sentences. The results showed, first, transfer of learning when the time-compressed sentences preceded the natural fast sentences, but not when the natural fast sentences preceded the time-compressed sentences. Second, listeners adapted to the natural fast sentences, but performance for this type of fast speech did not improve to the level of the time-compressed sentences. The results are discussed in the framework of theories on perceptual learning.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Jones, R. L., & Clark, V. (2009). A Semantics-Based Approach to the “no negative evidence” problem. Cognitive Science, 33(7), 1301-1316. doi:10.1111/j.1551-6709.2009.01055.x.

    Abstract

    Previous studies have shown that children retreat from argument-structure overgeneralization errors (e.g., *Don’t giggle me) by inferring that frequently encountered verbs are unlikely to be grammatical in unattested constructions, and by making use of syntax-semantics correspondences (e.g., verbs denoting internally caused actions such as giggling cannot normally be used causatively). The present study tested a new account based on a unitary learning mechanism that combines both of these processes. Seventy-two participants (ages 5–6, 9–10, and adults) rated overgeneralization errors with higher (*The funny man’s joke giggled Bart) and lower (*The funny man giggled Bart) degrees of direct external causation. The errors with more-direct causation were rated as less unacceptable than those with less-direct causation. This finding is consistent with the new account, under which children acquire—in an incremental and probabilistic fashion—the meaning of particular constructions (e.g., transitive causative = direct external causation) and particular verbs, rejecting generalizations where the incompatibility between the two is too great.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Tomasello, M. (2006). Comparing different accounts of inversion errors in children's non-subject wh-questions: ‘What experimental data can tell us?’. Journal of Child Language, 33(3), 519-557. doi:10.1017/S0305000906007513.

    Abstract

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words (what, who, how and why), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6–4;6. Rates of non-inversion error (Who she is hitting?) were found not to differ by wh-word, auxiliary or number alone, but by lexical auxiliary subtype and by wh-word+lexical auxiliary combination. This finding counts against simple rule-based accounts of question acquisition that include no role for the lexical subtype of the auxiliary, and suggests that children may initially acquire wh-word+lexical auxiliary combinations from the input. For DO questions, auxiliary-doubling errors (What does she does like?) were also observed, although previous research has found that such errors are virtually non-existent for positive questions. Possible reasons for this discrepancy are discussed.
  • Ambridge, B., & Rowland, C. F. (2009). Predicting children's errors with negative questions: Testing a schema-combination account. Cognitive Linguistics, 20(2), 225-266. doi:10.1515/COGL.2009.014.

    Abstract

    Positive and negative what, why and yes/no questions with the 3sg auxiliaries can and does were elicited from 50 children aged 3;3–4;3. In support of the constructivist “schema-combination” account, only children who produced a particular positive question type correctly (e.g., What does she want?) produced a characteristic “auxiliary-doubling” error (e.g., *What does she doesn't want?) for the corresponding negative question type. This suggests that these errors are formed by superimposing a positive question frame (e.g., What does THING PROCESS?) and an inappropriate negative frame (e.g., She doesn't PROCESS) learned from declarative utterances. In addition, a significant correlation between input frequency and correct production was observed for 11 of the 12 lexical frames (e.g., What does THING PROCESS?), although some negative question types showed higher rates of error than one might expect based on input frequency alone. Implications for constructivist and generativist theories of question-acquisition are discussed.
  • Ameka, F. K. (1999). [Review of M. E. Kropp Dakubu: Korle meets the sea: a sociolinguistic history of Accra]. Bulletin of the School of Oriental and African Studies, 62, 198-199. doi:10.1017/S0041977X0001836X.
  • Ameka, F. K. (1989). [Review of The case for lexicase: An outline of lexicase grammatical theory by Stanley Starosta]. Studies in Language, 13(2), 506-518.
  • Ameka, F. K. (1999). Partir c'est mourir un peu: Universal and culture specific features of leave taking. RASK International Journal of Language and Communication, 9/10, 257-283.
  • Ameka, F. K. (1999). Spatial information packaging in Ewe and Likpe: A comparative perspective. Frankfurter Afrikanistische Blätter, 11, 7-34.
  • Ameka, F. K. (1999). The typology and semantics of complex nominal duplication in Ewe. Anthropological Linguistics, 41, 75-106.
  • Ameka, F. K. (2009). Verb extensions in Likpe (Sɛkpɛlé). Journal of West African Languages, 36(1/2), 139-157.
  • Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Cognitive profiles in Portuguese children with dyslexia. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 23). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Visual processing factors contribute to object naming difficulties in dyslexic readers. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 39). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Baayen, R. H., Feldman, L. B., & Schreuder, R. (2006). Morphological influences on the recognition of monosyllabic monomorphemic words. Journal of Memory and Language, 55(2), 290-313. doi:10.1016/j.jml.2006.03.008.

    Abstract

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. Journal of Experimental Psychology: General, 133, 283–316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of monosyllabic, morphologically simple words. The present study supplements their work by making use of more flexible regression techniques that are better suited for dealing with collinearity and non-linearity, and by documenting the contributions of several variables that they did not take into account. In particular, we included measures of morphological connectivity, as well as a new frequency count, the frequency of a word in speech rather than in writing. The morphological measures emerged as strong predictors in visual lexical decision, but not in naming, providing evidence for the importance of morphological connectivity even for the recognition of morphologically simple words. Spoken frequency was predictive not only for naming but also for visual lexical decision. In addition, it co-determined subjective frequency estimates and norms for age of acquisition. Finally, we show that frequency predominantly reflects conceptual familiarity rather than familiarity with a word’s form.
  • Bastiaanse, R., De Goede, D., & Love, T. (2009). Auditory sentence processing: An introduction. Journal of Psycholinguistic Research, 38(3), 177-179. doi:10.1007/s10936-009-9109-3.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Cluitmans, P. J. M., & Brunia, C. H. M. (1999). Event-related desynchronization related to the anticipation of a stimulus providing knowledge of results. Clinical Neurophysiology, 110, 250-260.

    Abstract

    In the present paper, event-related desynchronization (ERD) in the alpha and beta frequency bands is quantified in order to investigate the processes related to the anticipation of a knowledge of results (KR) stimulus. In a time estimation task, 10 subjects were instructed to press a button 4 s after the presentation of an auditory stimulus. Two seconds after the response they received auditory or visual feedback on the timing of their response. Preceding the button press, a centrally maximal ERD is found. Preceding the visual KR stimulus, an ERD is present that has an occipital maximum. Contrary to expectation, preceding the auditory KR stimulus there are no signs of a modality-specific ERD. Results are related to a thalamo-cortical gating model which predicts a correspondence between negative slow potentials and ERD during motor preparation and stimulus anticipation.
  • Bauer, B. L. M. (1999). Aspects of impersonal constructions in Late Latin. In H. Petersmann, & R. Kettelmann (Eds.), Latin vulgaire – latin tardif V (pp. 209-211). Heidelberg: Winter.
  • Bauer, B. L. M. (1996). Residues of non-nominative syntax in Latin: The MIHI EST construction. Historische Sprachforschung, 109(2), 242-257.
  • Berck, P., Bibiko, H.-J., Kemps-Snijders, M., Russel, A., & Wittenburg, P. (2006). Ontology-based language archive utilization. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2295-2298).
  • Bethard, S., Lai, V. T., & Martin, J. (2009). Topic model analysis of metaphor frequency for psycholinguistic stimuli. In Proceedings of the NAACL HLT Workshop on Computational Approaches to Linguistic Creativity, Boulder, Colorado, June 4, 2009 (pp. 9-16). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    Psycholinguistic studies of metaphor processing must control their stimuli not just for word frequency but also for the frequency with which a term is used metaphorically. Thus, we consider the task of metaphor frequency estimation, which predicts how often target words will be used metaphorically. We develop metaphor classifiers which represent metaphorical domains through Latent Dirichlet Allocation, and apply these classifiers to the target words, aggregating their decisions to estimate the metaphorical frequencies. Training on only 400 sentences, our models are able to achieve 61.3% accuracy on metaphor classification and 77.8% accuracy on HIGH vs. LOW metaphorical frequency estimation.
  • Bock, K., Butterfield, S., Cutler, A., Cutting, J. C., Eberhard, K. M., & Humphreys, K. R. (2006). Number agreement in British and American English: Disagreeing to agree collectively. Language, 82(1), 64-113.

    Abstract

    British and American speakers exhibit different verb number agreement patterns when sentence subjects have collective head nouns. From linguistic and psycholinguistic accounts of how agreement is implemented, three alternative hypotheses can be derived to explain these differences. The hypotheses involve variations in the representation of notional number, disparities in how notional and grammatical number are used, and inequalities in the grammatical number specifications of collective nouns. We carried out a series of corpus analyses, production experiments, and norming studies to test these hypotheses. The results converge to suggest that British and American speakers are equally sensitive to variations in notional number and implement subject-verb agreement in much the same way, but are likely to differ in the lexical specifications of number for collectives. The findings support a psycholinguistic theory that explains verb and pronoun agreement within a parallel architecture of lexical and syntactic formulation.
  • Böcker, K. B. E., Bastiaansen, M. C. M., Vroomen, J., Brunia, C. H. M., & de Gelder, B. (1999). An ERP correlate of metrical stress in spoken word recognition. Psychophysiology, 36, 706-720. doi:10.1111/1469-8986.3660706.

    Abstract

    Rhythmic properties of spoken language such as metrical stress, that is, the alternation of strong and weak syllables, are important in speech recognition of stress-timed languages such as Dutch and English. Nineteen subjects listened passively to or discriminated actively between sequences of bisyllabic Dutch words, which started with either a weak or a strong syllable. Weak-initial words, which constitute 12% of the Dutch lexicon, evoked more negativity than strong-initial words in the interval between P2 and N400 components of the auditory event-related potential. This negativity was denoted as N325. The N325 was larger during stress discrimination than during passive listening. N325 was also larger when a weak-initial word followed a sequence of strong-initial words than when it followed words with the same stress pattern. The latter difference was larger for listeners who performed well on stress discrimination. It was concluded that the N325 is probably a manifestation of the extraction of metrical stress from the acoustic signal and its transformation into task requirements.
  • Bod, R., Fitz, H., & Zuidema, W. (2006). On the structural ambiguity in natural language that the neural architecture cannot deal with [Commentary]. Behavioral and Brain Sciences, 29, 71-72. doi:10.1017/S0140525X06239025.

    Abstract

    We argue that van der Velde and de Kamps's model does not solve the binding problem but merely shifts the burden of constructing appropriate neural representations of sentence structure to unexplained preprocessing of the linguistic input. As a consequence, their model is not able to explain how various neural representations can be assigned to sentences that are structurally ambiguous.
  • Boves, L., Carlson, R., Hinrichs, E., House, D., Krauwer, S., Lemnitzer, L., Vainio, M., & Wittenburg, P. (2009). Resources for speech research: Present and future infrastructure needs. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1803-1806).

    Abstract

    This paper introduces the EU-FP7 project CLARIN, a joint effort of over 150 institutions in Europe, aimed at the creation of a sustainable language resources and technology infrastructure for the humanities and social sciences research community. The paper briefly introduces the vision behind the project and how it relates to speech research with a focus on the contributions that CLARIN can and will make to research in spoken language processing.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Bowerman, M. (1996). Argument structure and learnability: Is a solution in sight? In J. Johnson, M. L. Juge, & J. L. Moxley (Eds.), Proceedings of the Twenty-second Annual Meeting of the Berkeley Linguistics Society, February 16-19, 1996. General Session and Parasession on The Role of Learnability in Grammatical Theory (pp. 454-468). Berkeley Linguistics Society.
  • Bramão, I., Faísca, L., Forkstam, C., Inácio, K., Petersson, K. M., & Reis, A. (2009). Interaction between perceptual color and color knowledge information in object recognition: Behavioral and electrophysiological evidence. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 39). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Brandt, S., Kidd, E., Lieven, E., & Tomasello, M. (2009). The discourse bases of relativization: An investigation of young German and English-speaking children's comprehension of relative clauses. Cognitive Linguistics, 20(3), 539-570. doi:10.1515/COGL.2009.024.

    Abstract

    In numerous comprehension studies, across different languages, children have performed worse on object relatives (e.g., the dog that the cat chased) than on subject relatives (e.g., the dog that chased the cat). One possible reason for this is that the test sentences did not exactly match the kinds of object relatives that children typically experience. Adults and children usually hear and produce object relatives with inanimate heads and pronominal subjects (e.g., the car that we bought last year) (cf. Kidd et al., Language and Cognitive Processes 22: 860–897, 2007). We tested young 3-year-old German- and English-speaking children with a referential selection task. Children from both language groups performed best in the condition where the experimenter described inanimate referents with object relatives that contained pronominal subjects (e.g., Can you give me the sweater that he bought?). Importantly, when the object relatives met the constraints identified in spoken discourse, children understood them as well as subject relatives, or even better. These results speak against a purely structural explanation for children's difficulty with object relatives as observed in previous studies, but rather support the usage-based account, according to which discourse function and experience with language shape the representation of linguistic structures.
  • Braun, B. (2006). Phonetics and phonology of thematic contrast in German. Language and Speech, 49(4), 451-493.

    Abstract

    It is acknowledged that contrast plays an important role in understanding discourse and information structure. While it is commonly assumed that contrast can be marked by intonation only, our understanding of the intonational realization of contrast is limited. For German there is mainly introspective evidence that the rising theme accent (or topic accent) is realized differently when signaling contrast than when not. In this article, the acoustic basis for the reported impressionistic differences is investigated in terms of the scaling (height) and alignment (positioning) of tonal targets.

    Subjects read target sentences in a contrastive and a noncontrastive context (Experiment 1). Prosodic annotation revealed that thematic accents were not realized with different accent types in the two contexts but acoustic comparison showed that themes in contrastive context exhibited a higher and later peak. The alignment and scaling of accents can hence be controlled in a linguistically meaningful way, which has implications for intonational phonology. In Experiment 2, nonlinguists' perception of a subset of the production data was assessed. They had to choose whether, in a contrastive context, the presumed contrastive or noncontrastive realization of a sentence was more appropriate. For some sentence pairs only, subjects had a clear preference. For Experiment 3, a group of linguists annotated the thematic accents of the contrastive and noncontrastive versions of the same data as used in Experiment 2. There was considerable disagreement in labels, but different accent types were consistently used when the two versions differed strongly in F0 excursion. Although themes in contrastive contexts were clearly produced differently than themes in noncontrastive contexts, this difference is not easily perceived or annotated.
  • Braun, B., Kochanski, G., Grabe, E., & Rosner, B. S. (2006). Evidence for attractors in English intonation. Journal of the Acoustical Society of America, 119(6), 4006-4015. doi:10.1121/1.2195267.

    Abstract

    Although the pitch of the human voice is continuously variable, some linguists contend that intonation in speech is restricted to a small, limited set of patterns. This claim is tested by asking subjects to mimic a block of 100 randomly generated intonation contours and then to imitate themselves in several successive sessions. The produced f0 contours gradually converge towards a limited set of distinct, previously recognized basic English intonation patterns. These patterns are "attractors" in the space of possible English intonation contours. The convergence does not occur immediately. Seven of the ten participants show continued convergence toward their attractors after the first iteration. Subjects retain and use information beyond phonological contrasts, suggesting that intonational phonology is not a complete description of their mental representation of intonation.
  • Broeder, D., Offenga, F., Wittenburg, P., Van de Kamp, P., Nathan, D., & Strömqvist, S. (2006). Technologies for a federation of language resource archives. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2291-2294).
  • Broeder, D., & Wittenburg, P. (2006). The IMDI metadata framework, its current application and future direction. International Journal of Metadata, Semantics and Ontologies, 1(2), 119-132. doi:10.1504/IJMSO.2006.011008.

    Abstract

    The IMDI Framework offers, in addition to a suitable set of metadata descriptors for language resources, a set of tools and an infrastructure to use these. This paper gives an overview of all these aspects and, at the end, describes the intentions and hopes for ensuring the interoperability of the IMDI framework with more general frameworks in development. An evaluation of the current state of the IMDI Framework is presented, with an analysis of the benefits and more problematic issues. Finally, we describe work on issues of long-term stability for IMDI by linking up to the work done within the ISO TC37/SC4 subcommittee.
  • Broeder, D., Auer, E., & Wittenburg, P. (2006). Unique resource identifiers. Language Archive Newsletter, no. 8, 8-9.
  • Broeder, D., Van Veenendaal, R., Nathan, D., & Strömqvist, S. (2006). A grid of language resource repositories. In Proceedings of the 2nd IEEE International Conference on e-Science and Grid Computing.
  • Broeder, D., Claus, A., Offenga, F., Skiba, R., Trilsbeek, P., & Wittenburg, P. (2006). LAMUS: The Language Archive Management and Upload System. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 2291-2294).
  • Broersma, M. (2006). Nonnative listeners rely less on phonetic information for phonetic categorization than native listeners. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 109-110).
  • Broersma, M., & De Bot, K. (2006). Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Bilingualism: Language and Cognition, 9(1), 1-13. doi:10.1017/S1366728905002348.

    Abstract

    In this article the triggering hypothesis for codeswitching proposed by Michael Clyne is discussed and tested. According to this hypothesis, cognates can facilitate codeswitching of directly preceding or following words. It is argued that the triggering hypothesis in its original form is incompatible with language production models, as it assumes that language choice takes place at the surface structure of utterances, while in bilingual production models language choice takes place along with lemma selection. An adjusted version of the triggering hypothesis is proposed in which triggering takes place during lemma selection and the scope of triggering is extended to basic units in language production. Data from a Dutch–Moroccan Arabic corpus are used for a statistical test of the original and the adjusted triggering theory. The codeswitching patterns found in the data support part of the original triggering hypothesis, but they are best explained by the adjusted triggering theory.
  • Broersma, M. (2006). Accident - execute: Increased activation in nonnative listening. In Proceedings of Interspeech 2006 (pp. 1519-1522).

    Abstract

    Dutch and English listeners’ perception of English words with partially overlapping onsets (e.g., accident - execute) was investigated. Partially overlapping words remained active longer for nonnative listeners, causing an increase of lexical competition in nonnative compared with native listening.
  • Broersma, M. (2009). Triggered codeswitching between cognate languages. Bilingualism: Language and Cognition, 12(4), 447-462. doi:10.1017/S1366728909990204.
  • Brouwer, G. J., Tong, F., Hagoort, P., & Van Ee, R. (2009). Perceptual incongruence influences bistability and cortical activation. Plos One, 4(3): e5056. doi:10.1371/journal.pone.0005056.

    Abstract

    We employed a parametric psychophysical design in combination with functional imaging to examine the influence of metric changes in perceptual incongruence on perceptual alternation rates and cortical responses. Subjects viewed a bistable stimulus defined by incongruent depth cues; bistability resulted from incongruence between binocular disparity and monocular perspective cues that specify different slants (slant rivalry). Psychophysical results revealed that perceptual alternation rates were positively correlated with the degree of perceived incongruence. Functional imaging revealed systematic increases in activity that paralleled the psychophysical results within anterior intraparietal sulcus, prior to the onset of perceptual alternations. We suggest that this cortical activity predicts the frequency of subsequent alternations, implying a putative causal role for these areas in initiating bistable perception. In contrast, areas implicated in form and depth processing (LOC and V3A) were sensitive to the degree of slant, but failed to show increases in activity when these cues were in conflict.
  • Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.

    Abstract

    This paper presents evidence on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing.
  • Brown, P. (1999). Anthropologie cognitive. Anthropologie et Sociétés, 23(3), 91-119.

    Abstract

    In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition.
  • Brown, P. (1989). [Review of the book Language, gender, and sex in comparative perspective ed. by Susan U. Philips, Susan Steele and Christine Tanz]. Man, 24(1), 192.
  • Brown, P. (2006). Language, culture and cognition: The view from space. Zeitschrift für Germanistische Linguistik, 34, 64-86.

    Abstract

    This paper addresses the vexed questions of how language relates to culture, and what kind of notion of culture is important for linguistic explanation. I first sketch five perspectives - five different construals - of culture apparent in linguistics and in cognitive science more generally. These are: (i) culture as ethno-linguistic group, (ii) culture as a mental module, (iii) culture as knowledge, (iv) culture as context, and (v) culture as a process emergent in interaction. I then present my own work on spatial language and cognition in a Mayan language and culture, to explain why I believe a concept of culture is important for linguistics. I argue for a core role for cultural explanation in two domains: in analysing the semantics of words embedded in cultural practices which color their meanings (in this case, spatial frames of reference), and in characterizing thematic and functional links across different domains in the social and semiotic life of a particular group of people.
  • Brown, P. (1999). Repetition [Encyclopedia entry for 'Lexicon for the New Millennium', ed. Alessandro Duranti]. Journal of Linguistic Anthropology, 9(2), 223-226. doi:10.1525/jlin.1999.9.1-2.223.

    Abstract

    This is an encyclopedia entry describing conversational and interactional uses of linguistic repetition.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Brucato, N., Cassar, O., Tonasso, L., Guitard, E., Migot-Nabias, F., Tortevoye, P., Plancoulaine, S., Larrouy, G., Gessain, A., & Dugoujon, J.-M. (2009). Genetic diversity and dynamics of the Noir Marron settlement in French Guyana: A study combining mitochondrial DNA, Y chromosome and HTLV-1 genotyping [Abstract]. AIDS Research and Human Retroviruses, 25(11), 1258. doi:10.1089/aid.2009.9992.

    Abstract

    The Noir Marron are the direct descendants of thousands of African slaves deported to the Guyanas during the Atlantic Slave Trade and later escaped mainly from Dutch colonial plantations. Six ethnic groups are officially recognized, four of which are located in French Guyana: the Aluku, the Ndjuka, the Saramaka, and the Paramaka. The aim of this study was: (1) to determine the Noir Marron settlement through genetic exchanges with other communities such as Amerindians and Europeans; (2) to retrace their origins in Africa. Buffy-coat DNA from 142 Noir Marron, currently living in French Guyana, was analyzed using mtDNA (typing of SNP coding regions and sequencing of HVSI/II) and Y chromosomes (typing STR and SNPs) to define their genetic profile. Results were compared to an African database composed of published data, updated with genotypes of 82 Fon from Benin, and 128 Ahizi and 63 Yacouba from the Ivory Coast obtained in this study for the same markers. Furthermore, the determination of the genomic subtype of HTLV-1 strains (env gp21 and LTR regions), which can be used as a marker of migration of infected populations, was performed for samples from 23 HTLV-1 infected Noir Marron and compared with the corresponding database. MtDNA profiles showed a high haplotype diversity, in which 99% of samples belonged to the major haplogroup L, frequent in Africa. Each haplotype was largely represented on the West African coast, but notably higher homologies were obtained with the samples present in the Gulf of Guinea. Y chromosome analysis revealed the same pattern, i.e. a conservation of the African contribution to the Noir Marron genetic profile, with 98% of haplotypes belonging to the major haplogroup E1b1a, frequent in West Africa. The genetic diversity was higher than that observed in African populations, reflecting the breadth of the Noir Marron’s African homeland, but a predominant identity in the Gulf of Guinea can be suggested. Concerning HTLV-1 genotyping, all the Noir Marron strains belonged to the large Cosmopolitan A subtype. However, among them 17/23 (74%) clustered with the West African clade comprising samples originating from the Ivory Coast, Ghana, Burkina Faso and Senegal, while 3 others clustered in the Trans-Sahelian clade and the remaining 3 were similar to strains found in individuals in South America. Through the combined analyses of three approaches, we have provided a conclusive image of the genetic profile of the Noir Marron communities studied. The high degree of preservation of the African gene pool contradicts the expected gene flow that would correspond to the major cultural exchanges observed between Noir Marron, Europeans and Amerindians. Marital practices and historical events could explain these observations. Corresponding to historical and cultural data, the origin of the ethnic groups is widely dispersed throughout West Africa. However, all results converge to suggest an individualization from a major birthplace in the Gulf of Guinea.
  • Brucato, N., Tortevoye, P., Plancoulaine, S., Guitard, E., Sanchez-Mazas, A., Larrouy, G., Gessain, A., & Dugoujon, J.-M. (2009). The genetic diversity of three peculiar populations descending from the slave trade: Gm study of Noir Marron from French Guiana. Comptes Rendus Biologies, 332(10), 917-926. doi:10.1016/j.crvi.2009.07.005.

    Abstract

    The Noir Marron communities are the direct descendants of African slaves brought to the Guianas during the four centuries (16th to 19th) of the Atlantic slave trade. Among them, three major ethnic groups have been studied: the Aluku, the Ndjuka and the Saramaka. Their history led them to share close relationships with Europeans and Amerindians, as largely documented in their cultural records. The study of Gm polymorphisms of immunoglobulins may help to estimate the amount of gene flow linked to these cultural exchanges. Surprisingly, very low levels of European contribution (2.6%) and Amerindian contribution (1.7%) are detected in the Noir Marron gene pool. On the other hand, an African contribution of 95.7% redraws their origin to West Africa (FST ≤ 0.15). This highly preserved African gene pool of the Noir Marron is unique in comparison to other African American populations of Latin America, who are notably more admixed.

  • Brugman, H., Malaisé, V., & Gazendam, L. (2006). A web based general thesaurus browser to support indexing of television and radio programs. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1488-1491).
  • Burenhult, N. (2009). [Commentary on M. Meschiari, 'Roots of the savage mind: Apophenia and imagination as cognitive process']. Quaderni di semantica, 30(2), 239-242. doi:10.1400/127893.
  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Burenhult, N., & Wegener, C. (2009). Preliminary notes on the phonology, orthography and vocabulary of Semnam (Austroasiatic, Malay Peninsula). Journal of the Southeast Asian Linguistics Society, 1, 283-312. Retrieved from http://www.jseals.org/.

    Abstract

    This paper tentatively reports some features of Semnam, a Central Aslian language spoken by some 250 people in the Perak valley, Peninsular Malaysia. It outlines the unusually rich phonemic system of this hitherto undescribed language (e.g. a vowel system comprising 36 distinctive nuclei), and proposes a practical orthography for it. It also includes the c. 1,250-item wordlist on which the analysis is based, collected intermittently in the field in 2006-2008.
  • Burnham, D., Ambikairajah, E., Arciuli, J., Bennamoun, M., Best, C. T., Bird, S., Butcher, A. R., Cassidy, S., Chetty, G., Cox, F. M., Cutler, A., Dale, R., Epps, J. R., Fletcher, J. M., Goecke, R., Grayden, D. B., Hajek, J. T., Ingram, J. C., Ishihara, S., Kemp, N., Kinoshita, Y., Kuratate, T., Lewis, T. W., Loakes, D. E., Onslow, M., Powers, D. M., Rose, P., Togneri, R., Tran, D., & Wagner, M. (2009). A blueprint for a comprehensive Australian English auditory-visual speech corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus (pp. 96-107). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    Large auditory-visual (AV) speech corpora are the grist of modern research in speech science, but no such corpus exists for Australian English. This is unfortunate, for speech science is the brains behind speech technology and applications such as text-to-speech (TTS) synthesis, automatic speech recognition (ASR), speaker recognition and forensic identification, talking heads, and hearing prostheses. Advances in these research areas in Australia require a large corpus of Australian English. Here the authors describe a blueprint for building the Big Australian Speech Corpus (the Big ASC), a corpus of over 1,100 speakers from urban and rural Australia, including speakers of non-indigenous, indigenous, ethnocultural, and disordered forms of Australian English, each of whom would be sampled on three occasions in a range of speech tasks designed by the researchers who would be using the corpus.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Campisi, E. (2009). La gestualità co-verbale tra comunicazione e cognizione: In che senso i gesti sono intenzionali. In F. Parisi, & M. Primo (Eds.), Natura, comunicazione, neurofilosofie. Atti del III convegno 2009 del CODISCO. Rome: Squilibri.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuate focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable more aversive context was correlated to brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex that we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically by drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potential multiple spurious analyses/generations.
  • Casasanto, D., Willems, R. M., & Hagoort, P. (2009). Body-specific representations of action verbs: Evidence from fMRI in right- and left-handers. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 875-880). Austin: Cognitive Science Society.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating our own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis (Casasanto, 2009), we used fMRI to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated left premotor cortex during lexical decision on manual action verbs (compared with non-manual action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body-specific: Right and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Casasanto, D. (2009). Embodiment of abstract concepts: Good and bad in right- and left-handers. Journal of Experimental Psychology: General, 138, 351-367. doi:10.1037/a0015854.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis, people who interact with their physical environments in systematically different ways should form correspondingly different mental representations. In a test of this hypothesis, 5 experiments investigated links between handedness and the mental representation of abstract concepts with positive or negative valence (e.g., honesty, sadness, intelligence). Mappings from spatial location to emotional valence differed between right- and left-handed participants. Right-handers tended to associate rightward space with positive ideas and leftward space with negative ideas, but left-handers showed the opposite pattern, associating rightward space with negative ideas and leftward with positive ideas. These contrasting mental metaphors for valence cannot be attributed to linguistic experience, because idioms in English associate good with right but not with left. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they could act more fluently with their dominant hands. These results support the body-specificity hypothesis and provide evidence for the perceptuomotor basis of even the most abstract ideas.
  • Casasanto, D., & Jasmin, K. (2009). Emotional valence is body-specific: Evidence from spontaneous gestures during US presidential debates. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1965-1970). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between motor action and emotion? Here we investigated whether people associate good things more strongly with the dominant side of their bodies, and bad things with the non-dominant side. To find out, we analyzed spontaneous gestures during speech expressing ideas with positive or negative emotional valence (e.g., freedom, pain, compassion). Samples of speech and gesture were drawn from the 2004 and 2008 US presidential debates, which involved two left-handers (Obama, McCain) and two right-handers (Kerry, Bush). Results showed a strong association between the valence of spoken clauses and the hands used to make spontaneous co-speech gestures. In right-handed candidates, right-hand gestures were more strongly associated with positive-valence clauses, and left-hand gestures with negative-valence clauses. Left-handed candidates showed the opposite pattern. Right- and left-handers implicitly associated positive valence more strongly with their dominant hand: the hand they can use more fluently. These results support the body-specificity hypothesis (Casasanto, 2009), and suggest a perceptuomotor basis for even our most abstract ideas.
  • Casasanto, D. (2009). [Review of the book Music, language, and the brain by Aniruddh D. Patel]. Language and Cognition, 1(1), 143-146. doi:10.1515/LANGCOG.2009.007.
  • Casasanto, D., Fotakopoulou, O., & Boroditsky, L. (2009). Space and time in the child's mind: Evidence for a cross-dimensional asymmetry. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1090-1095). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children (N=99) watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., Which animal went for a longer time, or a longer distance?). Results showed a reliable cross-dimensional asymmetry: for the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of language used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.
  • Cavaco, P., Curuklu, B., & Petersson, K. M. (2009). Artificial grammar recognition using two spiking neural networks. Frontiers in Neuroinformatics. Conference abstracts: 2nd INCF Congress of Neuroinformatics. doi:10.3389/conf.neuro.11.2009.08.096.

    Abstract

    In this paper we explore the feasibility of artificial (formal) grammar recognition (AGR) using spiking neural networks. A biologically inspired minicolumn architecture is designed as the basic computational unit. A network topography is defined based on the minicolumn architecture, here referred to as nodes, connected with excitatory and inhibitory connections. Nodes in the network represent unique internal states of the grammar’s finite state machine (FSM). Future work to improve the performance of the networks is discussed. The modeling framework developed can be used by neurophysiological research to implement network layouts and compare simulated performance characteristics to actual subject performance.
  • Chen, Y., & Braun, B. (2006). Prosodic realization in information structure categories in standard Chinese. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This paper investigates the prosodic realization of information structure categories in Standard Chinese. A number of proper names with different tonal combinations were elicited as a grammatical subject in five pragmatic contexts. Results show that both duration and F0 range of the tonal realizations were adjusted to signal the information structure categories (i.e. theme vs. rheme and background vs. focus). Rhemes consistently induced a longer duration and a more expanded F0 range than themes. Focus, compared to background, generally induced lengthening and F0 range expansion (the presence and magnitude of which, however, are dependent on the tonal structure of the proper names). Within the rheme focus condition, corrective rheme focus induced more expanded F0 range than normal rheme focus.
  • Chen, A. (2006). Variations in the marking of focus in child language. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 113-114).
  • Chen, A. (2006). Interface between information structure and intonation in Dutch wh-questions. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD Press.

    Abstract

    This study set out to investigate how accent placement is pragmatically governed in WH-questions. Central to this issue are questions such as whether the intonation of the WH-word depends on the information structure of the non-WH word part, whether topical constituents can be accented, and whether constituents in the non-WH word part can be non-topical and accented. Previous approaches, based either on carefully composed examples or on read speech, differ in their treatments of these questions and consequently make opposing claims on the intonation of WH-questions. We addressed these questions by examining a corpus of 90 naturally occurring WH-questions, selected from the Spoken Dutch Corpus. Results show that the intonation of the WH-word is related to the information structure of the non-WH word part. Further, topical constituents can get accented and the accents are not necessarily phonetically reduced. Additionally, certain adverbs, which have no topical relation to the presupposition of the WH-questions, also get accented. They appear to function as a device for enhancing speaker engagement.
  • Chen, X. S., Collins, L. J., Biggs, P. J., & Penny, D. (2009). High throughput genome-wide survey of small RNAs from the parasitic protists Giardia intestinalis and Trichomonas vaginalis. Genome Biology and Evolution, 1, 165-175. doi:10.1093/gbe/evp017.

    Abstract

    RNA interference (RNAi) is a set of mechanisms which regulate gene expression in eukaryotes. Key elements of RNAi are small sense and antisense RNAs from 19 to 26 nucleotides generated from double-stranded RNAs. miRNAs are a major type of RNAi-associated small RNAs and are found in most eukaryotes studied to date. To investigate whether small RNAs associated with RNAi appear to be present in all eukaryotic lineages, and therefore present in the ancestral eukaryote, we studied two deep-branching protozoan parasites, Giardia intestinalis and Trichomonas vaginalis. Little is known about endogenous small RNAs involved in RNAi of these organisms. Using Illumina Solexa sequencing and genome-wide analysis of small RNAs from these distantly related deep-branching eukaryotes, we identified 10 strong miRNA candidates from Giardia and 11 from Trichomonas. We also found evidence of Giardia siRNAs potentially involved in the expression of variant-specific-surface proteins. In addition, 8 new snoRNAs from Trichomonas are identified. Our results indicate that miRNAs are likely to be general in ancestral eukaryotes, and therefore are likely to be a universal feature of eukaryotes.
  • Chen, A. (2009). Intonation and reference maintenance in Turkish learners of Dutch: A first insight. AILE - Acquisition et Interaction en Langue Etrangère, 28(2), 67-91.

    Abstract

    This paper investigates L2 learners’ use of intonation in reference maintenance in comparison to native speakers at three longitudinal points. Nominal referring expressions were elicited from two untutored Turkish learners of Dutch and five native speakers of Dutch via a film retelling task, and were analysed in terms of pitch span and word duration. Effects of two types of change in information states were examined, between new and given and between new and accessible. We found native-like use of word duration in both types of change early on, but performance in the use of pitch span differed between the learners, with development over time in one learner. Further, the use of morphosyntactic devices had different effects on the two learners. The inter-learner differences and late systematic use of pitch span, in spite of similar use of pitch span in learners’ L1 and L2, suggest that learning may play a role in the acquisition of intonation as a device for reference maintenance.
  • Chen, A. (2009). Perception of paralinguistic intonational meaning in a second language. Language Learning, 59(2), 367-409.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Choi, S., McDonough, L., Bowerman, M., & Mandler, J. M. (1999). Early sensitivity to language-specific spatial categories in English and Korean. Cognitive Development, 14, 241-268. doi:10.1016/S0885-2014(99)00004-0.

    Abstract

    This study investigates young children’s comprehension of spatial terms in two languages that categorize space strikingly differently. English makes a distinction between actions resulting in containment (put in) versus support or surface attachment (put on), while Korean makes a cross-cutting distinction between tight-fit relations (kkita) versus loose-fit or other contact relations (various verbs). In particular, the Korean verb kkita refers to actions resulting in a tight-fit relation regardless of containment or support. In a preferential looking study we assessed the comprehension of in by 20 English learners and kkita by 10 Korean learners, all between 18 and 23 months. The children viewed pairs of scenes while listening to sentences with and without the target word. The target word led children to gaze at different and language-appropriate aspects of the scenes. We conclude that children are sensitive to language-specific spatial categories by 18–23 months.
  • Cholin, J., & Levelt, W. J. M. (2009). Effects of syllable preparation and syllable frequency in speech production: Further evidence for syllabic units at a post-lexical level. Language and Cognitive Processes, 24, 662-684. doi:10.1080/01690960802348852.

    Abstract

    In the current paper, we asked at what level in the speech planning process speakers retrieve stored syllables. There is evidence that syllable structure plays an essential role in the phonological encoding of words (e.g., online syllabification and phonological word formation). There is also evidence that syllables are retrieved as whole units. However, findings that clearly pinpoint these effects to specific levels in speech planning are scarce. We used a naming variant of the implicit priming paradigm to contrast voice onset latencies for frequency-manipulated disyllabic Dutch pseudo-words. While prior implicit priming studies only manipulated the item's form and/or syllable-structure overlap, we introduced syllable frequency as an additional factor. If the preparation effect for syllables obtained in the implicit priming paradigm proceeds beyond phonological planning, i.e., includes the retrieval of stored syllables, then the preparation effect should differ for high- and low-frequency syllables. The findings reported here confirm this prediction: Low-frequency syllables benefit significantly more from the preparation than high-frequency syllables. Our findings support the notion of a mental syllabary at a post-lexical level, between the levels of phonological and phonetic encoding.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster compared to low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model by Levelt et al., are discussed.
  • Chu, M., & Kita, S. (2009). Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 591-595). Austin, TX: Cognitive Science Society.

    Abstract

    When we speak, we spontaneously produce gestures (co-speech gestures). Co-speech gestures and speech production are closely interlinked. However, the exact nature of the link is still under debate. To address the question of whether co-speech gestures originate from the speech production system or from a system independent of speech production, the present study examined the relationship between co-speech and co-thought gestures. Co-thought gestures, produced during silent thinking without speaking, presumably originate from a system independent of the speech production processes. We found a positive correlation between the production frequency of co-thought and co-speech gestures, regardless of the communicative function that co-speech gestures might serve. Therefore, we suggest that co-speech gestures and co-thought gestures originate from a common system that is independent of the speech production processes.
  • Clifton, Jr., C., Cutler, A., McQueen, J. M., & Van Ooijen, B. (1999). The processing of inflected forms. [Commentary on H. Clahsen: Lexical entries and rules of language.]. Behavioral and Brain Sciences, 22, 1018-1019.

    Abstract

    Clahsen proposes two distinct processing routes, for regularly and irregularly inflected forms, respectively, and thus is apparently making a psychological claim. We argue that his position, which embodies a strictly linguistic perspective, does not constitute a psychological processing model.
  • Collins, L. J., & Chen, X. S. (2009). Ancestral RNA: The RNA biology of the eukaryotic ancestor. RNA Biology, 6(5), 495-502. doi:10.4161/rna.6.5.9551.

    Abstract

    Our knowledge of RNA biology within eukaryotes has exploded over the last five years. Within new research we see that some features that were once thought to be part of multicellular life have now been identified in several protist lineages. Hence, it is timely to ask which features of eukaryote RNA biology are ancestral to all eukaryotes. We focus on RNA-based regulation and epigenetic mechanisms that use small regulatory ncRNAs and long ncRNAs, to highlight some of the many questions surrounding eukaryotic ncRNA evolution.
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Crasborn, O., Sloetjes, H., Auer, E., & Wittenburg, P. (2006). Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In C. Vetoori (Ed.), Proceedings of the 2nd Workshop on the Representation and Processing of Sign languages: Lexicographic matters and didactic scenarios (pp. 82-87). Paris: ELRA.

    Abstract

    This paper describes hardware and software that can be used for the phonetic study of sign languages. The field of sign language phonetics is characterised, and the hardware that is currently in use is described. The paper focuses on the software that was developed to enable the recording of finger and hand movement data, and the additions to the ELAN annotation software that facilitate the further visualisation and analysis of the data.
  • Cronin, K. A., Schroeder, K. K. E., Rothwell, E. S., Silk, J. B., & Snowdon, C. T. (2009). Cooperatively breeding cottontop tamarins (Saguinus oedipus) do not donate rewards to their long-term mates. Journal of Comparative Psychology, 123(3), 231-241. doi:10.1037/a0015094.

    Abstract

    This study tested the hypothesis that cooperative breeding facilitates the emergence of prosocial behavior by presenting cottontop tamarins (Saguinus oedipus) with the option to provide food rewards to pair-bonded mates. In Experiment 1, tamarins could provide rewards to mates at no additional cost while obtaining rewards for themselves. Contrary to the hypothesis, tamarins did not demonstrate a preference to donate rewards, behaving similarly to chimpanzees in previous studies. In Experiment 2, the authors eliminated rewards for the donor for a stricter test of prosocial behavior, while reducing separation distress and food preoccupation. Again, the authors found no evidence for a donation preference. Furthermore, tamarins were significantly less likely to deliver rewards to mates when the mate displayed interest in the reward. The results of this study contrast with those recently reported for cooperatively breeding common marmosets, and indicate that prosocial preferences in a food donation task do not emerge in all cooperative breeders. In previous studies, cottontop tamarins have cooperated and reciprocated to obtain food rewards; the current findings sharpen understanding of the boundaries of cottontop tamarins’ food-provisioning behavior.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cutler, A., Kim, J., & Otake, T. (2006). On the limits of L1 influence on non-L1 listening: Evidence from Japanese perception of Korean. In P. Warren, & C. I. Watson (Eds.), Proceedings of the 11th Australian International Conference on Speech Science & Technology (pp. 106-111).

    Abstract

    Language-specific procedures which are efficient for listening to the L1 may be applied to non-native spoken input, often to the detriment of successful listening. However, such misapplications of L1-based listening do not always happen. We propose, based on the results from two experiments in which Japanese listeners detected target sequences in spoken Korean, that an L1 procedure is only triggered if requisite L1 features are present in the input.
  • Cutler, A., & Pasveer, D. (2006). Explaining cross-linguistic differences in effects of lexical stress on spoken-word recognition. In R. Hoffmann, & H. Mixdorff (Eds.), Speech Prosody 2006. Dresden: TUD press.

    Abstract

    Experiments have revealed differences across languages in listeners’ use of stress information in recognising spoken words. Previous comparisons of the vocabulary of Spanish and English had suggested that the explanation of this asymmetry might lie in the extent to which considering stress in spoken-word recognition allows rejection of unwanted competition from words embedded in other words. This hypothesis was tested on the vocabularies of Dutch and German, for which word recognition results resemble those from Spanish more than those from English. The vocabulary statistics likewise revealed that in each language, the reduction of embeddings resulting from taking stress into account is more similar to the reduction achieved in Spanish than in English.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2006). Coping with speaker-related variation via abstract phonemic categories. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 31-32).
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.

    Abstract

    English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
  • Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1989). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research, however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach, & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.

    Abstract

    Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models incorporating obligatory influence of lexical information in phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
  • Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.

    Abstract

    An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.
  • Cutler, A. (1996). The comparative study of spoken-language processing. In H. T. Bunnell (Ed.), Proceedings of the Fourth International Conference on Spoken Language Processing: Vol. 1 (pp. 1). New York: Institute of Electrical and Electronics Engineers.

    Abstract

    Psycholinguists are saddled with a paradox. Their aim is to construct a model of human language processing, which will hold equally well for the processing of any language, but this aim cannot be achieved just by doing experiments in any language. They have to compare processing of many languages, and actively search for effects which are specific to a single language, even though a model which is itself specific to a single language is really the last thing they want.
  • Cutler, A., & Norris, D. (1999). Sharpening Ockham’s razor (Commentary on W.J.M. Levelt, A. Roelofs & A.S. Meyer: A theory of lexical access in speech production). Behavioral and Brain Sciences, 22, 40-41.

    Abstract

    Language production and comprehension are intimately interrelated; and models of production and comprehension should, we argue, be constrained by common architectural guidelines. Levelt et al.'s target article adopts as guiding principle Ockham's razor: the best model of production is the simplest one. We recommend adoption of the same principle in comprehension, with consequent simplification of some well-known types of models.
  • Cutler, A., Van Ooijen, B., Norris, D., & Sanchez-Casas, R. (1996). Speeded detection of vowels: A cross-linguistic study. Perception and Psychophysics, 58, 807-822. Retrieved from http://www.psychonomic.org/search/view.cgi?id=430.

    Abstract

    In four experiments, listeners’ response times to detect vowel targets in spoken input were measured. The first three experiments were conducted in English. In two, one using real words and the other, nonwords, detection accuracy was low, targets in initial syllables were detected more slowly than targets in final syllables, and both response time and missed-response rate were inversely correlated with vowel duration. In a third experiment, the speech context for some subjects included all English vowels, while for others, only five relatively distinct vowels occurred. This manipulation had essentially no effect, and the same response pattern was again observed. A fourth experiment, conducted in Spanish, replicated the results in the first three experiments, except that miss rate was here unrelated to vowel duration. We propose that listeners’ responses to vowel targets in naturally spoken input are effectively cautious, reflecting realistic appreciation of vowel variability in natural context.
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A., & Otake, T. (1999). Pitch accent in spoken-word recognition in Japanese. Journal of the Acoustical Society of America, 105, 1877-1888.

    Abstract

    Three experiments addressed the question of whether pitch-accent information may be exploited in the process of recognizing spoken words in Tokyo Japanese. In a two-choice classification task, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted (e.g., ka from baka HL or gaka LH); most judgments were correct, and listeners’ decisions were correlated with the fundamental frequency characteristics of the syllables. In a gating experiment, listeners heard initial fragments of words and guessed what the words were; their guesses overwhelmingly had the same initial accent structure as the gated word even when only the beginning CV of the stimulus (e.g., na- from nagasa HLL or nagashi LHH) was presented. In addition, listeners were more confident in guesses with the same initial accent structure as the stimulus than in guesses with different accent. In a lexical decision experiment, responses to spoken words (e.g., ame HL) were speeded by previous presentation of the same word (e.g., ame HL) but not by previous presentation of a word differing only in accent (e.g., ame LH). Together these findings provide strong evidence that accentual information constrains the activation and selection of candidates for spoken-word recognition.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Cutler, A., & Otake, T. (1996). The processing of word prosody in Japanese. In P. McCormack, & A. Russell (Eds.), Proceedings of the 6th Australian International Conference on Speech Science and Technology (pp. 599-604). Canberra: Australian Speech Science and Technology Association.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
