Publications

  • Abdel Rahman, R., Van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5), 850-860. doi:10.1037/0278-7393.29.5.850.

    Abstract

    In the present study, the authors examined with event-related brain potentials whether phonological encoding in picture naming is mediated by basic semantic feature retrieval or proceeds independently. In a manual 2-choice go/no-go task the choice response depended on a semantic classification (animal vs. object) and the execution decision was contingent on a classification of name phonology (vowel vs. consonant). The introduction of a semantic task mixing procedure allowed for selectively manipulating the speed of semantic feature retrieval. Serial and parallel models were tested on the basis of their differential predictions for the effect of this manipulation on the lateralized readiness potential and N200 component. The findings indicate that phonological code retrieval is not strictly contingent on prior basic semantic feature processing.
  • Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge?: Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16(3), 372-382. doi:10.1016/S0926-6410(02)00305-1.

    Abstract

    In this article a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/nogo tasks pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) made it possible to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel to the retrieval of semantic features. A suggestion is made as to how these findings can be accommodated within models of speech production.
  • Acheson, D. J., & MacDonald, M. C. (2009). Twisting tongues and memories: Explorations of the relationship between language production and verbal working memory. Journal of Memory and Language, 60(3), 329-350. doi:10.1016/j.jml.2008.12.002.

    Abstract

    Many accounts of working memory posit specialized storage mechanisms for the maintenance of serial order. We explore an alternative, that maintenance is achieved through temporary activation in the language production architecture. Four experiments examined the extent to which the phonological similarity effect can be explained as a sublexical speech error. Phonologically similar nonword stimuli were ordered to create tongue twister or control materials used in four tasks: reading aloud, immediate spoken recall, immediate typed recall, and serial recognition. Dependent measures from working memory (recall accuracy) and language production (speech errors) fields were used. Even though lists were identical except for item order, robust effects of tongue twisters were observed. Speech error analyses showed that errors were better described as phoneme rather than item ordering errors. The distribution of speech errors was comparable across all experiments and exhibited syllable-position effects, suggesting an important role for production processes. Implications for working memory and language production are discussed.
  • Acheson, D. J., & MacDonald, M. C. (2009). Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychological Bulletin, 135(1), 50-68. doi:10.1037/a0014411.

    Abstract

    Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information.
  • Adank, P., Smits, R., & Van Hout, R. (2003). Modeling perceived vowel height, advancement, and rounding. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 647-650). Adelaide: Causal Productions.
  • Adank, P., & Janse, E. (2009). Perceptual learning of time-compressed and natural fast speech. Journal of the Acoustical Society of America, 126(5), 2649-2659. doi:10.1121/1.3216914.

    Abstract

    Speakers vary their speech rate considerably during a conversation, and listeners are able to quickly adapt to these variations in speech rate. Adaptation to fast speech rates is usually measured using artificially time-compressed speech. This study examined adaptation to two types of fast speech: artificially time-compressed speech and natural fast speech. Listeners performed a speeded sentence verification task on three series of sentences: normal-speed sentences, time-compressed sentences, and natural fast sentences. Listeners were divided into two groups to evaluate the possibility of transfer of learning between the time-compressed and natural fast conditions. The first group verified the natural fast before the time-compressed sentences, while the second verified the time-compressed before the natural fast sentences. The results showed transfer of learning when the time-compressed sentences preceded the natural fast sentences, but not when natural fast sentences preceded the time-compressed sentences. In addition, listeners showed adaptation to the natural fast sentences, but performance for this type of fast speech did not improve to the level of the time-compressed sentences. The results are discussed in the framework of theories on perceptual learning.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Alario, F.-X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. doi:10.1016/S0093-934X(02)00556-4.

    Abstract

    We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produced similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities—e.g., to say the name of a picture and immediately after to write it down—he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.
  • Aleman, A., Formisano, E., Koppenhagen, H., Hagoort, P., De Haan, E. H. F., & Kahn, R. S. (2005). The functional neuroanatomy of metrical stress evaluation of perceived and imagined spoken words. Cerebral Cortex, 15(2), 221-228. doi:10.1093/cercor/bhh124.

    Abstract

    We hypothesized that areas in the temporal lobe that have been implicated in the phonological processing of spoken words would also be activated during the generation and phonological processing of imagined speech. We tested this hypothesis using functional magnetic resonance imaging during a behaviorally controlled task of metrical stress evaluation. Subjects were presented with bisyllabic words and had to determine the alternation of strong and weak syllables. Thus, they were required to discriminate between weak-initial words and strong-initial words. In one condition, the stimuli were presented auditorily to the subjects (by headphones). In the other condition the stimuli were presented visually on a screen and subjects were asked to imagine hearing the word. Results showed activation of the supplementary motor area, inferior frontal gyrus (Broca's area) and insula in both conditions. In the superior temporal gyrus (STG) and in the superior temporal sulcus (STS) strong activation was observed during the auditory (perceptual) condition. However, a region located in the posterior part of the STS/STG also responded during the imagery condition. No activation of this same region of the STS was observed during a control condition which also involved processing of visually presented words, but which required a semantic decision from the subject. We suggest that processing of metrical stress, with or without auditory input, relies in part on cortical interface systems located in the posterior part of STS/STG. These results corroborate behavioral evidence regarding phonological loop involvement in auditory–verbal imagery.
  • Allerhand, M., Butterfield, S., Cutler, A., & Patterson, R. (1992). Assessing syllable strength via an auditory model. In Proceedings of the Institute of Acoustics: Vol. 14 Part 6 (pp. 297-304). St. Albans, Herts: Institute of Acoustics.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Jones, R. L., & Clark, V. (2009). A semantics-based approach to the “no negative evidence” problem. Cognitive Science, 33(7), 1301-1316. doi:10.1111/j.1551-6709.2009.01055.x.

    Abstract

    Previous studies have shown that children retreat from argument-structure overgeneralization errors (e.g., *Don’t giggle me) by inferring that frequently encountered verbs are unlikely to be grammatical in unattested constructions, and by making use of syntax-semantics correspondences (e.g., verbs denoting internally caused actions such as giggling cannot normally be used causatively). The present study tested a new account based on a unitary learning mechanism that combines both of these processes. Seventy-two participants (ages 5–6, 9–10, and adults) rated overgeneralization errors with higher (*The funny man’s joke giggled Bart) and lower (*The funny man giggled Bart) degrees of direct external causation. The errors with more-direct causation were rated as less unacceptable than those with less-direct causation. This finding is consistent with the new account, under which children acquire—in an incremental and probabilistic fashion—the meaning of particular constructions (e.g., transitive causative = direct external causation) and particular verbs, rejecting generalizations where the incompatibility between the two is too great.
  • Ambridge, B., & Rowland, C. F. (2009). Predicting children's errors with negative questions: Testing a schema-combination account. Cognitive Linguistics, 20(2), 225-266. doi:10.1515/COGL.2009.014.

    Abstract

    Positive and negative what, why and yes/no questions with the 3sg auxiliaries can and does were elicited from 50 children aged 3;3–4;3. In support of the constructivist “schema-combination” account, only children who produced a particular positive question type correctly (e.g., What does she want?) produced a characteristic “auxiliary-doubling” error (e.g., *What does she doesn't want?) for the corresponding negative question type. This suggests that these errors are formed by superimposing a positive question frame (e.g., What does THING PROCESS?) and an inappropriate negative frame (e.g., She doesn't PROCESS) learned from declarative utterances. In addition, a significant correlation between input frequency and correct production was observed for 11 of the 12 lexical frames (e.g., What does THING PROCESS?), although some negative question types showed higher rates of error than one might expect based on input frequency alone. Implications for constructivist and generativist theories of question-acquisition are discussed.
  • Ameka, F. K. (1992). Interjections: The universal yet neglected part of speech. Journal of Pragmatics, 18(2/3), 101-118. doi:10.1016/0378-2166(92)90048-G.
  • Ameka, F. K. (1992). The meaning of phatic and conative interjections. Journal of Pragmatics, 18(2/3), 245-271. doi:10.1016/0378-2166(92)90054-F.

    Abstract

    The purpose of this paper is to investigate the meanings of the members of two subclasses of interjections in Ewe: the conative/volitive which are directed at an auditor, and the phatic which are used in the maintenance of social and communicative contact. It is demonstrated that interjections like other linguistic signs have meanings which can be rigorously stated. In addition, the paper explores the differences and similarities between the semantic structures of interjections on one hand and formulaic words on the other. This is done through a comparison of the semantics and pragmatics of an interjection and a formulaic word which are used for welcoming people in Ewe. It is contended that formulaic words are speech acts qua speech acts while interjections are not fully fledged speech acts because they lack illocutionary dictum in their semantic structure.
  • Ameka, F. K. (2009). Verb extensions in Likpe (Sɛkpɛlé). Journal of West African Languages, 36(1/2), 139-157.
  • Andrieu, C., Figuerola, H., Jacquemot, E., Le Guen, O., Roullet, J., & Salès, C. (2005). Parfum de rose, odeur de sainteté: Un sermon Tzeltal sur la première sainte des Amériques. Ateliers du LESC, 29, 11-67. Retrieved from http://ateliers.revues.org/document174.html.
  • Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Cognitive profiles in Portuguese children with dyslexia. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 23). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2009). Visual processing factors contribute to object naming difficulties in dyslexic readers. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 39). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Baayen, R. H., & Moscoso del Prado Martín, F. (2005). Semantic density and past-tense formation in three Germanic languages. Language, 81(3), 666-698. doi:10.1353/lan.2005.0112.

    Abstract

    It is widely believed that the difference between regular and irregular verbs is restricted to form. This study questions that belief. We report a series of lexical statistics showing that irregular verbs cluster in denser regions in semantic space. Compared to regular verbs, irregular verbs tend to have more semantic neighbors that in turn have relatively many other semantic neighbors that are morphologically irregular. We show that this greater semantic density for irregulars is reflected in association norms, familiarity ratings, visual lexical-decision latencies, and word-naming latencies. Meta-analyses of the materials of two neuroimaging studies show that in these studies, regularity is confounded with differences in semantic density. Our results challenge the hypothesis of the supposed formal encapsulation of rules of inflection and support lines of research in which sensitivity to probability is recognized as intrinsic to human language.
  • Bastiaanse, R., De Goede, D., & Love, T. (2009). Auditory sentence processing: An introduction. Journal of Psycholinguistic Research, 38(3), 177-179. doi:10.1007/s10936-009-9109-3.
  • Bastiaansen, M. C. M., Van der Linden, M., Ter Keurs, M., Dijkstra, T., & Hagoort, P. (2005). Theta responses are involved in lexico-semantic retrieval during language processing. Journal of Cognitive Neuroscience, 17, 530-541. doi:10.1162/0898929053279469.

    Abstract

    Oscillatory neuronal dynamics, observed in the human electroencephalogram (EEG) during language processing, have been related to the dynamic formation of functionally coherent networks that serve the role of integrating the different sources of information needed for understanding the linguistic input. To further explore the functional role of oscillatory synchrony during language processing, we quantified event-related EEG power changes induced by the presentation of open-class (OC) words and closed-class (CC) words in a wide range of frequencies (from 1 to 30 Hz), while subjects read a short story. Word presentation induced three oscillatory components: a theta power increase (4–7 Hz), an alpha power decrease (10–12 Hz), and a beta power decrease (16–21 Hz). Whereas the alpha and beta responses showed mainly quantitative differences between the two word classes, the theta responses showed qualitative differences between OC words and CC words: A theta power increase was found over left temporal areas for OC words, but not for CC words. The left temporal theta increase may index the activation of a network involved in retrieving the lexical–semantic properties of the OC items.
  • Bastiaansen, M. C. M., & Hagoort, P. (2003). Event-induced theta responses as a window on the dynamics of memory. Cortex, 39(4-5), 967-972. doi:10.1016/S0010-9452(08)70873-6.

    Abstract

    An important, but often ignored distinction in the analysis of EEG signals is that between evoked activity and induced activity. Whereas evoked activity reflects the summation of transient post-synaptic potentials triggered by an event, induced activity, which is mainly oscillatory in nature, is thought to reflect changes in parameters controlling dynamic interactions within and between brain structures. We hypothesize that induced activity may yield information about the dynamics of cell assembly formation, activation and subsequent uncoupling, which may play a prominent role in different types of memory operations. We then describe a number of analysis tools that can be used to study the reactivity of induced rhythmic activity, both in terms of amplitude changes and of phase variability.

    We briefly discuss how alpha, gamma and theta rhythms are thought to be generated, paying special attention to the hypothesis that the theta rhythm reflects dynamic interactions between the hippocampal system and the neocortex. This hypothesis would imply that studying the reactivity of scalp-recorded theta may provide a window on the contribution of the hippocampus to memory functions.

    We review studies investigating the reactivity of scalp-recorded theta in paradigms engaging episodic memory, spatial memory and working memory. In addition, we review studies that relate theta reactivity to processes at the interface of memory and language. Despite many unknowns, the experimental evidence largely supports the hypothesis that theta activity plays a functional role in cell assembly formation, a process which may constitute the neural basis of memory formation and retrieval. The available data provide only highly indirect support for the hypothesis that scalp-recorded theta yields information about hippocampal functioning. It is concluded that studying induced rhythmic activity holds promise as an additional important way to study brain function.
  • Bauer, B. L. M. (2003). The adverbial formation in mente in Vulgar and Late Latin: A problem in grammaticalization. In H. Solin, M. Leiwo, & H. Halla-aho (Eds.), Latin vulgaire, latin tardif VI (pp. 439-457). Hildesheim: Olms.
  • Belke, E., Brysbaert, M., Meyer, A. S., & Ghyselinck, M. (2005). Age of acquisition effects in picture naming: Evidence for a lexical-semantic competition hypothesis. Cognition, 96, B45-B54. doi:10.1016/j.cognition.2004.11.006.

    Abstract

    In many tasks the effects of frequency and age of acquisition (AoA) on reaction latencies are similar in size. However, in picture naming the AoA-effect is often significantly larger than expected on the basis of the frequency-effect. Previous explanations of this frequency-independent AoA-effect have attributed it to the organisation of the semantic system or to the way phonological word forms are stored in the mental lexicon. Using a semantic blocking paradigm, we show that semantic context effects on naming latencies are more pronounced for late-acquired than for early-acquired words. This interaction between AoA and naming context is likely to arise during lexical-semantic encoding, which we put forward as the locus for the frequency-independent AoA-effect.
  • Belke, E., Meyer, A. S., & Damian, M. F. (2005). Refractory effects in picture naming as assessed in a semantic blocking paradigm. The Quarterly Journal of Experimental Psychology Section A, 58, 667-692. doi:10.1080/02724980443000142.

    Abstract

    In the cyclic semantic blocking paradigm participants repeatedly name sets of objects with semantically related names (homogeneous sets) or unrelated names (heterogeneous sets). The naming latencies are typically longer in related than in unrelated sets. In a first experiment we replicated this semantic blocking effect and demonstrated that the effect only arose after all objects of a set had been shown and named once. In a further experiment, the objects of a set were presented simultaneously (instead of on successive trials). Evidence for semantic blocking was found in the naming latencies and in the gaze durations for the objects, which were longer in homogeneous than in heterogeneous sets. For the gaze-to-speech lag between the offset of gaze on an object and the onset of the articulation of its name, a repetition priming effect was obtained but no blocking effect. A final experiment showed that the blocking effect for speech onset latencies generalized to new, previously unnamed lexical items. We propose that the blocking effect is due to refractory behaviour in the semantic system.
  • Bethard, S., Lai, V. T., & Martin, J. (2009). Topic model analysis of metaphor frequency for psycholinguistic stimuli. In Proceedings of the NAACL HLT Workshop on Computational Approaches to Linguistic Creativity, Boulder, Colorado, June 4, 2009 (pp. 9-16). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    Psycholinguistic studies of metaphor processing must control their stimuli not just for word frequency but also for the frequency with which a term is used metaphorically. Thus, we consider the task of metaphor frequency estimation, which predicts how often target words will be used metaphorically. We develop metaphor classifiers which represent metaphorical domains through Latent Dirichlet Allocation, and apply these classifiers to the target words, aggregating their decisions to estimate the metaphorical frequencies. Training on only 400 sentences, our models are able to achieve 61.3% accuracy on metaphor classification and 77.8% accuracy on HIGH vs. LOW metaphorical frequency estimation.
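
    As a concrete illustration of the pipeline this abstract describes, the following is a minimal, hypothetical Python sketch: LDA topic features are fed to a classifier, and the classifier's per-sentence decisions are aggregated into a metaphorical-frequency estimate for a target word. The toy corpus, labels, topic count, and choice of logistic regression are placeholder assumptions for illustration only, not the authors' implementation or data.

```python
# Hypothetical sketch: LDA topic features + classifier, aggregated into a
# metaphorical-frequency estimate. Corpus and labels below are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

train_sentences = ["the economy is on fire", "he lit a fire in the stove"]  # toy data
train_labels = [1, 0]  # 1 = metaphorical use of the target word, 0 = literal

vectorizer = CountVectorizer(stop_words="english")
X_counts = vectorizer.fit_transform(train_sentences)

# Represent each sentence by its topic distribution (placeholder topic count).
lda = LatentDirichletAllocation(n_components=10, random_state=0)
X_topics = lda.fit_transform(X_counts)

clf = LogisticRegression().fit(X_topics, train_labels)

def metaphorical_frequency(sentences):
    """Estimate how often a target word is used metaphorically,
    as the proportion of its sentences classified as metaphorical."""
    topics = lda.transform(vectorizer.transform(sentences))
    return clf.predict(topics).mean()
```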
  • Bien, H., Levelt, W. J. M., & Baayen, R. H. (2005). Frequency effects in compound production. Proceedings of the National Academy of Sciences of the United States of America, 102(49), 17876-17881.

    Abstract

    Four experiments investigated the role of frequency information in compound production by independently varying the frequencies of the first and second constituent as well as the frequency of the compound itself. Pairs of Dutch noun-noun compounds were selected such that there was a maximal contrast for one frequency while matching the other two frequencies. In a position-response association task, participants first learned to associate a compound with a visually marked position on a computer screen. In the test phase, participants had to produce the associated compound in response to the appearance of the position mark, and we measured speech onset latencies. The compound production latencies varied significantly according to factorial contrasts in the frequencies of both constituting morphemes but not according to a factorial contrast in compound frequency, providing further evidence for decompositional models of speech production. In a stepwise regression analysis of the joint data of Experiments 1-4, however, compound frequency was a significant nonlinear predictor, with facilitation in the low-frequency range and a trend toward inhibition in the high-frequency range. Furthermore, a combination of structural measures of constituent frequencies and entropies explained significantly more variance than a strict decompositional model, including cumulative root frequency as the only measure of constituent frequency, suggesting a role for paradigmatic relations in the mental lexicon.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying (or thinking and speaking) that must be bridged during the creation of even the most prosaic utterances of a language.
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bonte, M. L., Mitterer, H., Zellagui, N., Poelmans, H., & Blomert, L. (2005). Auditory cortical tuning to statistical regularities in phonology. Clinical Neurophysiology, 116(12), 2765-2774. doi:10.1016/j.clinph.2005.08.012.

    Abstract

    Objective: Ample behavioral evidence suggests that distributional properties of the language environment influence the processing of speech. Yet, how these characteristics are reflected in neural processes remains largely unknown. The present ERP study investigates neurophysiological correlates of phonotactic probability: the distributional frequency of phoneme combinations. Methods: We employed an ERP measure indicative of experience-dependent auditory memory traces, the mismatch negativity (MMN). We presented pairs of non-words that differed by the degree of phonotactic probability in a codified passive oddball design that minimizes the contribution of acoustic processes. Results: In Experiment 1 the non-word with high phonotactic probability (notsel) elicited a significantly enhanced MMN as compared to the non-word with low phonotactic probability (notkel). In Experiment 2 this finding was replicated with a non-word pair with a smaller acoustic difference (notsel–notfel). An MMN enhancement was not observed in a third acoustic control experiment with stimuli having comparable phonotactic probability (so–fo). Conclusions: Our data suggest that auditory cortical responses to phoneme clusters are modulated by statistical regularities of phoneme combinations. Significance: This study indicates that the language environment is relevant in shaping the neural processing of speech. Furthermore, it provides a potentially useful design for investigating implicit phonological processing in children with anomalous language functions like dyslexia.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2005). Onset entropy matters: Letter-to-phoneme mappings in seven languages. Reading and Writing, 18, 211-229. doi:10.1007/s11145-005-3001-9.
  • Boves, L., Carlson, R., Hinrichs, E., House, D., Krauwer, S., Lemnitzer, L., Vainio, M., & Wittenburg, P. (2009). Resources for speech research: Present and future infrastructure needs. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1803-1806).

    Abstract

    This paper introduces the EU-FP7 project CLARIN, a joint effort of over 150 institutions in Europe, aimed at the creation of a sustainable language resources and technology infrastructure for the humanities and social sciences research community. The paper briefly introduces the vision behind the project and how it relates to speech research with a focus on the contributions that CLARIN can and will make to research in spoken language processing.
  • Bowerman, M. (1975). Commentary on L. Bloom, P. Lightbown, & L. Hood, “Structure and variation in child language”. Monographs of the Society for Research in Child Development, 40(2), 80-90. Retrieved from http://www.jstor.org/stable/1165986.
  • Bramão, I., Faísca, L., Forkstam, C., Inácio, K., Petersson, K. M., & Reis, A. (2009). Interaction between perceptual color and color knowledge information in object recognition: Behavioral and electrophysiological evidence. In Abstracts presented at the International Neuropsychological Society, Finnish Neuropsychological Society, Joint Mid-Year Meeting July 29-August 1, 2009. Helsinki, Finland & Tallinn, Estonia (pp. 39). Retrieved from http://www.neuropsykologia.fi/ins2009/INS_MY09_Abstract.pdf.
  • Brandt, S., Kidd, E., Lieven, E., & Tomasello, M. (2009). The discourse bases of relativization: An investigation of young German and English-speaking children's comprehension of relative clauses. Cognitive Linguistics, 20(3), 539-570. doi:10.1515/COGL.2009.024.

    Abstract

    In numerous comprehension studies, across different languages, children have performed worse on object relatives (e.g., the dog that the cat chased) than on subject relatives (e.g., the dog that chased the cat). One possible reason for this is that the test sentences did not exactly match the kinds of object relatives that children typically experience. Adults and children usually hear and produce object relatives with inanimate heads and pronominal subjects (e.g., the car that we bought last year) (cf. Kidd et al., Language and Cognitive Processes 22: 860–897, 2007). We tested young 3-year-old German- and English-speaking children with a referential selection task. Children from both language groups performed best in the condition where the experimenter described inanimate referents with object relatives that contained pronominal subjects (e.g., Can you give me the sweater that he bought?). Importantly, when the object relatives met the constraints identified in spoken discourse, children understood them as well as subject relatives, or even better. These results speak against a purely structural explanation for children's difficulty with object relatives as observed in previous studies, but rather support the usage-based account, according to which discourse function and experience with language shape the representation of linguistic structures.
  • Braun, B., Weber, A., & Crocker, M. (2005). Does narrow focus activate alternative referents? In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 1709-1712).

    Abstract

    Narrow focus refers to accent placement that forces one interpretation of a sentence, which is then often perceived contrastively. Narrow focus is formalised in terms of alternative sets, i.e. contextually or situationally salient alternatives. In this paper, we investigate whether this model also holds in human utterance processing. We present an eye-tracking experiment to study listeners’ expectations (i.e. eye-movements) with respect to upcoming referents. Some of the objects contrast in colour with objects that were previously referred to, others do not; the objects are referred to with either a narrow focus on the colour adjective or with broad focus on the noun. Results show that narrow focus on the adjective increases early fixations to contrastive referents. Narrow focus hence activates alternative referents in human utterance processing.
  • Broeder, D., Brugman, H., & Senft, G. (2005). Documentation of languages and archiving of language data at the Max Planck Institute for Psycholinguistics in Nijmegen. Linguistische Berichte, no. 201, 89-103.
  • Broersma, M. (2005). Perception of familiar contrasts in unfamiliar positions. Journal of the Acoustical Society of America, 117(6), 3890-3901. doi:10.1121/1.1906060.
  • Broersma, M. (2009). Triggered codeswitching between cognate languages. Bilingualism: Language and Cognition, 12(4), 447-462. doi:10.1017/S1366728909990204.
  • Brouwer, G. J., Tong, F., Hagoort, P., & Van Ee, R. (2009). Perceptual incongruence influences bistability and cortical activation. Plos One, 4(3): e5056. doi:10.1371/journal.pone.0005056.

    Abstract

    We employed a parametric psychophysical design in combination with functional imaging to examine the influence of metric changes in perceptual incongruence on perceptual alternation rates and cortical responses. Subjects viewed a bistable stimulus defined by incongruent depth cues; bistability resulted from incongruence between binocular disparity and monocular perspective cues that specify different slants (slant rivalry). Psychophysical results revealed that perceptual alternation rates were positively correlated with the degree of perceived incongruence. Functional imaging revealed systematic increases in activity that paralleled the psychophysical results within anterior intraparietal sulcus, prior to the onset of perceptual alternations. We suggest that this cortical activity predicts the frequency of subsequent alternations, implying a putative causal role for these areas in initiating bistable perception. In contrast, areas implicated in form and depth processing (LOC and V3A) were sensitive to the degree of slant, but failed to show increases in activity when these cues were in conflict.
  • Brown, P. (2005). What does it mean to learn the meaning of words? [Review of the book How children learn the meanings of words by Paul Bloom]. Journal of the Learning Sciences, 14(2), 293-300. doi:10.1207/s15327809jls1402_6.
  • Brown, P., & Levinson, S. C. (1992). 'Left' and 'right' in Tenejapa: Investigating a linguistic and conceptual gap. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6), 590-611.

    Abstract

    From the perspective of a Kantian belief in the fundamental human tendency to cleave space along the three planes of the human body, Tenejapan Tzeltal exhibits a linguistic gap: there are no linguistic expressions that designate regions (as in English to my left) or describe the visual field (as in to the left of the tree) on the basis of a plane bisecting the body into a left and right side. Tenejapans have expressions for left and right hands (xin k'ab and wa'el k'ab), but these are basically body-part terms; they are not generalized to form a division of space. This paper describes the results of various elicited production tasks in which concepts of left and right would provide a simple solution, showing that Tenejapan consultants use other notions even when the relevant linguistic distinctions could be made in Tzeltal (e.g. describing the position of one's limbs, or describing rotation of one's body). Instead of using the left-hand/right-hand distinction to construct a division of space, Tenejapans utilize a number of other systems: (i) an absolute, 'cardinal direction' system, supplemented by reference to other geographic or landmark directions, (ii) a generative segmentation of objects and places into analogic body-parts or other kinds of parts, and (iii) a rich system of positional adjectives to describe the exact disposition of things. These systems work conjointly to specify locations with precision and elegance. The overall system is not primarily egocentric, and it makes no essential reference to planes through the human body.
  • Brown, A., & Gullberg, M. (2005). Convergence in emerging and established language system: Evidence from speech and gesture in L1 Japanese. In Y. Terao, & K. Sawasaki (Eds.), Handbook of the 7th International Conference of the Japanese Society for Language Sciences (pp. 172-173). Tokyo: JSLS.
  • Brown, A. (2005). [Review of the book The resilience of language: What gesture creation in deaf children can tell us about how all children learn language by Susan Goldin-Meadow]. Linguistics, 43(3), 662-666.
  • Brucato, N., Cassar, O., Tonasso, L., Guitard, E., Migot-Nabias, F., Tortevoye, P., Plancoulaine, S., Larrouy, G., Gessain, A., & Dugoujon, J.-M. (2009). Genetic diversity and dynamics of the Noir Marron settlement in French Guyana: A study combining mitochondrial DNA, Y chromosome and HTLV-1 genotyping [Abstract]. AIDS Research and Human Retroviruses, 25(11), 1258. doi:10.1089/aid.2009.9992.

    Abstract

    The Noir Marron are the direct descendants of thousands of African slaves deported to the Guyanas during the Atlantic Slave Trade who later escaped, mainly from Dutch colonial plantations. Six ethnic groups are officially recognized, four of which are located in French Guyana: the Aluku, the Ndjuka, the Saramaka, and the Paramaka. The aim of this study was: (1) to determine the Noir Marron settlement through genetic exchanges with other communities such as Amerindians and Europeans; (2) to retrace their origins in Africa. Buffy-coat DNA from 142 Noir Marron, currently living in French Guyana, was analyzed using mtDNA (typing of SNP coding regions and sequencing of HVSI/II) and Y chromosomes (typing STR and SNPs) to define their genetic profile. Results were compared to an African database composed of published data, updated with genotypes of 82 Fon from Benin, and 128 Ahizi and 63 Yacouba from the Ivory Coast obtained in this study for the same markers. Furthermore, the determination of the genomic subtype of HTLV-1 strains (env gp21 and LTR regions), which can be used as a marker of migration of infected populations, was performed for samples from 23 HTLV-1 infected Noir Marron and compared with the corresponding database. MtDNA profiles showed a high haplotype diversity, in which 99% of samples belonged to the major haplogroup L, frequent in Africa. Each haplotype was largely represented on the West African coast, but notably higher homologies were obtained with the samples present in the Gulf of Guinea. Y Chromosome analysis revealed the same pattern, i.e. a conservation of the African contribution to the Noir Marron genetic profile, with 98% of haplotypes belonging to the major haplogroup E1b1a, frequent in West Africa. The genetic diversity was higher than that observed in African populations, reflecting the breadth of the Noir Marron's African fatherland, but a predominant identity in the Gulf of Guinea can be suggested. Concerning HTLV-1 genotyping, all the Noir Marron strains belonged to the large Cosmopolitan A subtype. However, among them 17/23 (74%) clustered with the West African clade comprising samples originating from the Ivory Coast, Ghana, Burkina Faso and Senegal, while 3 others clustered in the Trans-Sahelian clade and the remaining 3 were similar to strains found in individuals in South America. Through the combined analyses of three approaches, we have provided a conclusive image of the genetic profile of the Noir Marron communities studied. The high degree of preservation of the African gene pool contradicts the expected gene flow that would correspond to the major cultural exchanges observed between Noir Marron, Europeans and Amerindians. Marital practices and historical events could explain these observations. Corresponding to historical and cultural data, the origin of the ethnic groups is widely dispersed throughout West Africa. However, all results converge to suggest an individualization from a major birthplace in the Gulf of Guinea.
  • Brucato, N., Tortevoye, P., Plancoulaine, S., Guitard, E., Sanchez-Mazas, A., Larrouy, G., Gessain, A., & Dugoujon, J.-M. (2009). The genetic diversity of three peculiar populations descending from the slave trade: Gm study of Noir Marron from French Guiana. Comptes Rendus Biologies, 332(10), 917-926. doi:10.1016/j.crvi.2009.07.005.

    Abstract

    The Noir Marron communities are the direct descendants of African slaves brought to the Guianas during the four centuries (16th to 19th) of the Atlantic slave trade. Among them, three major ethnic groups have been studied: the Aluku, the Ndjuka and the Saramaka. Their history led them to share close relationships with Europeans and Amerindians, as largely documented in their cultural records. The study of Gm polymorphisms of immunoglobulins may help to estimate the amount of gene flow linked to these cultural exchanges. Surprisingly, very low levels of European contribution (2.6%) and Amerindian contribution (1.7%) are detected in the Noir Marron gene pool. On the other hand, an African contribution of 95.7% traces their origin to West Africa (FST ≤ 0.15). This highly preserved African gene pool of the Noir Marron is unique in comparison to other African American populations of Latin America, who are notably more admixed.

  • Burenhult, N. (2009). [Commentary on M. Meschiari, 'Roots of the savage mind: Apophenia and imagination as cognitive process']. Quaderni di semantica, 30(2), 239-242. doi:10.1400/127893.
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Burenhult, N., & Wegener, C. (2009). Preliminary notes on the phonology, orthography and vocabulary of Semnam (Austroasiatic, Malay Peninsula). Journal of the Southeast Asian Linguistics Society, 1, 283-312. Retrieved from http://www.jseals.org/.

    Abstract

    This paper reports tentatively some features of Semnam, a Central Aslian language spoken by some 250 people in the Perak valley, Peninsular Malaysia. It outlines the unusually rich phonemic system of this hitherto undescribed language (e.g. a vowel system comprising 36 distinctive nuclei), and proposes a practical orthography for it. It also includes the c. 1,250-item wordlist on which the analysis is based, collected intermittently in the field in 2006-2008.
  • Burnham, D., Ambikairajah, E., Arciuli, J., Bennamoun, M., Best, C. T., Bird, S., Butcher, A. R., Cassidy, S., Chetty, G., Cox, F. M., Cutler, A., Dale, R., Epps, J. R., Fletcher, J. M., Goecke, R., Grayden, D. B., Hajek, J. T., Ingram, J. C., Ishihara, S., Kemp, N., Kinoshita, Y., Kuratate, T., Lewis, T. W., Loakes, D. E., Onslow, M., Powers, D. M., Rose, P., Togneri, R., Tran, D., & Wagner, M. (2009). A blueprint for a comprehensive Australian English auditory-visual speech corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus (pp. 96-107). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    Large auditory-visual (AV) speech corpora are the grist of modern research in speech science, but no such corpus exists for Australian English. This is unfortunate, for speech science is the brains behind speech technology and applications such as text-to-speech (TTS) synthesis, automatic speech recognition (ASR), speaker recognition and forensic identification, talking heads, and hearing prostheses. Advances in these research areas in Australia require a large corpus of Australian English. Here the authors describe a blueprint for building the Big Australian Speech Corpus (the Big ASC), a corpus of over 1,100 speakers from urban and rural Australia, including speakers of non-indigenous, indigenous, ethnocultural, and disordered forms of Australian English, each of whom would be sampled on three occasions in a range of speech tasks designed by the researchers who would be using the corpus.
  • Campisi, E. (2009). La gestualità co-verbale tra comunicazione e cognizione: In che senso i gesti sono intenzionali. In F. Parisi, & M. Primo (Eds.), Natura, comunicazione, neurofilosofie. Atti del III convegno 2009 del CODISCO. Rome: Squilibri.
  • Casasanto, D., Willems, R. M., & Hagoort, P. (2009). Body-specific representations of action verbs: Evidence from fMRI in right- and left-handers. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 875-880). Austin: Cognitive Science Society.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating our own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis (Casasanto, 2009), we used fMRI to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated left premotor cortex during lexical decision on manual action verbs (compared with non-manual action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body-specific: Right and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Casasanto, D. (2009). Embodiment of abstract concepts: Good and bad in right- and left-handers. Journal of Experimental Psychology: General, 138, 351-367. doi:10.1037/a0015854.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis, people who interact with their physical environments in systematically different ways should form correspondingly different mental representations. In a test of this hypothesis, 5 experiments investigated links between handedness and the mental representation of abstract concepts with positive or negative valence (e.g., honesty, sadness, intelligence). Mappings from spatial location to emotional valence differed between right- and left-handed participants. Right-handers tended to associate rightward space with positive ideas and leftward space with negative ideas, but left-handers showed the opposite pattern, associating rightward space with negative ideas and leftward with positive ideas. These contrasting mental metaphors for valence cannot be attributed to linguistic experience, because idioms in English associate good with right but not with left. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they could act more fluently with their dominant hands. These results support the body-specificity hypothesis and provide evidence for the perceptuomotor basis of even the most abstract ideas.
  • Casasanto, D., & Jasmin, K. (2009). Emotional valence is body-specific: Evidence from spontaneous gestures during US presidential debates. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1965-1970). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between motor action and emotion? Here we investigated whether people associate good things more strongly with the dominant side of their bodies, and bad things with the non-dominant side. To find out, we analyzed spontaneous gestures during speech expressing ideas with positive or negative emotional valence (e.g., freedom, pain, compassion). Samples of speech and gesture were drawn from the 2004 and 2008 US presidential debates, which involved two left-handers (Obama, McCain) and two right-handers (Kerry, Bush). Results showed a strong association between the valence of spoken clauses and the hands used to make spontaneous co-speech gestures. In right-handed candidates, right-hand gestures were more strongly associated with positive-valence clauses, and left-hand gestures with negative-valence clauses. Left-handed candidates showed the opposite pattern. Right- and left-handers implicitly associated positive valence more strongly with their dominant hand: the hand they can use more fluently. These results support the body-specificity hypothesis (Casasanto, 2009) and suggest a perceptuomotor basis for even our most abstract ideas.
  • Casasanto, D. (2009). [Review of the book Music, language, and the brain by Aniruddh D. Patel]. Language and Cognition, 1(1), 143-146. doi:10.1515/LANGCOG.2009.007.
  • Casasanto, D., Fotakopoulou, O., & Boroditsky, L. (2009). Space and time in the child's mind: Evidence for a cross-dimensional asymmetry. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31st Annual Meeting of the Cognitive Science Society (pp. 1090-1095). Austin: Cognitive Science Society.

    Abstract

    What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children (N=99) watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., Which animal went for a longer time, or a longer distance?). Results showed a reliable cross-dimensional asymmetry: for the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of language used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.
  • Cavaco, P., Curuklu, B., & Petersson, K. M. (2009). Artificial grammar recognition using two spiking neural networks. Frontiers in Neuroinformatics. Conference abstracts: 2nd INCF Congress of Neuroinformatics. doi:10.3389/conf.neuro.11.2009.08.096.

    Abstract

    In this paper we explore the feasibility of artificial (formal) grammar recognition (AGR) using spiking neural networks. A biologically inspired minicolumn architecture is designed as the basic computational unit. A network topography is defined based on the minicolumn architecture, here referred to as nodes, connected with excitatory and inhibitory connections. Nodes in the network represent unique internal states of the grammar’s finite state machine (FSM). Future work to improve the performance of the networks is discussed. The modeling framework developed can be used by neurophysiological research to implement network layouts and compare simulated performance characteristics to actual subject performance.
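
    The networks described in this abstract implement the internal states of an artificial grammar's finite state machine (FSM) with spiking minicolumn nodes. The spiking dynamics are beyond a short sketch, but the following minimal, hypothetical Python example shows the kind of FSM recognition such a network is trained to perform; the grammar, state names, and transitions are made-up placeholders, not those used in the paper.

```python
# Hypothetical FSM for an artificial grammar; states, symbols, and transitions
# are placeholders. The paper's networks represent such states with spiking
# minicolumn nodes, which this plain-Python sketch does not model.
GRAMMAR = {
    # state: {symbol: next_state}
    "S0": {"M": "S1", "V": "S2"},
    "S1": {"T": "S1", "V": "S3"},
    "S2": {"X": "S2", "R": "S3"},
    "S3": {},  # accepting state with no outgoing transitions
}
ACCEPTING = {"S3"}

def is_grammatical(string, start="S0"):
    """Return True if the symbol string is generated by the FSM."""
    state = start
    for symbol in string:
        transitions = GRAMMAR.get(state, {})
        if symbol not in transitions:
            return False
        state = transitions[symbol]
    return state in ACCEPTING

print(is_grammatical("MTTV"))  # True: S0 -> S1 -> S1 -> S1 -> S3
print(is_grammatical("MX"))    # False: no X transition out of S1
```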
  • Chen, A., & De Ruiter, J. P. (2005). The role of pitch accent type in interpreting information status. Proceedings from the Annual Meeting of the Chicago Linguistic Society, 41(1), 33-48.

    Abstract

    The present study set out to pin down the role of four pitch accents, fall (H*L), rise-fall (L*HL), rise (L*H), fall-rise (H*LH), as well as deaccentuation, in interpreting new vs. given information in British English, using the eyetracking paradigm. The pitch accents in question were claimed to convey information status in theories of English intonational meaning. There is, however, no consensus on the postulated roles of these pitch accents. Results clearly show that pitch accent type can and does matter when interpreting information status. The effects can be reflected in the mean proportions of fixations to the competitor in a selected time window. These patterns are also present in proportions of fixations to the target but to a lesser extent. Interestingly, the effects of pitch accent types are also reflected in how fast the participants could adjust their decision as to which picture to move before the name of the picture was fully revealed. For example, when the competitor was a given entity, the proportion of fixations to the competitor initially increased in most accent conditions as a result of subjects' bias towards a given entity, but started to decrease substantially earlier in the H*L condition than in the L*H and deaccentuation conditions.
  • Chen, A., & Den Os, E. (2005). Effects of pitch accent type on interpreting information status in synthetic speech. In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 1913-1916).
  • Chen, J. (2005). Interpreting state-change: Learning the meaning of verbs and verb compounds in Mandarin. In Proceedings of the 29th Annual Boston University Conference on Language Development.

    Abstract

    This study investigates how Mandarin-speaking children interpret state-change verbs. In Mandarin, state-change is typically encoded with resultative verb compounds (RVCs), in which the first verb (V1) specifies an action and the second (V2) a result, for example, zhai-xia 'pick-descend' (= pick, pick off/down). Unlike English state-change verbs such as pick, smash, mix and fill, the action verb (V1) may imply a state-change but it does not entail it; the state-change is specified by the additional result verb (V2). Previous studies have shown that children learning English and German tend to neglect the state-change meaning in monomorphemic state-change verbs like mix and fill (Gentner, 1978; Gropen et al., 1991) and verb-particle constructions like abplücken 'pick off' (Wittek, 1999, 2000): they do not realize that this meaning is entailed. This study examines how Mandarin-speaking children interpret resultative verb compounds and the first verb of an RVC. Four groups of Mandarin-speaking children (mean ages 2;6, 3;6, 4;6, 6;1) and an adult group participated in a judgment task. The results show that Mandarin-speaking children know from a very young age that RVCs entail a state-change; ironically, however, they make a mistake that is just the opposite of that made by the learners of English and German: they often incorrectly interpret the action verb (V1) of an RVC as if it, in itself, also entails a state-change, even though it does not. This result suggests that children do not have a uniform strategy for interpreting verb meaning, but are influenced by the language-specific lexicalization patterns they encounter in their language.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1069-1072). Adelaide: Causal Productions.
  • Chen, X. S., Collins, L. J., Biggs, P. J., & Penny, D. (2009). High throughput genome-wide survey of small RNAs from the parasitic protists Giardia intestinalis and Trichomonas vaginalis. Genome Biology and Evolution, 1, 165-175. doi:10.1093/gbe/evp017.

    Abstract

    RNA interference (RNAi) is a set of mechanisms which regulate gene expression in eukaryotes. Key elements of RNAi are small sense and antisense RNAs from 19 to 26 nucleotides generated from double-stranded RNAs. miRNAs are a major type of RNAi-associated small RNAs and are found in most eukaryotes studied to date. To investigate whether small RNAs associated with RNAi appear to be present in all eukaryotic lineages, and therefore present in the ancestral eukaryote, we studied two deep-branching protozoan parasites, Giardia intestinalis and Trichomonas vaginalis. Little is known about endogenous small RNAs involved in RNAi of these organisms. Using Illumina Solexa sequencing and genome-wide analysis of small RNAs from these distantly related deep-branching eukaryotes, we identified 10 strong miRNA candidates from Giardia and 11 from Trichomonas. We also found evidence of Giardia siRNAs potentially involved in the expression of variant-specific-surface proteins. In addition, 8 new snoRNAs from Trichomonas were identified. Our results indicate that miRNAs were likely present in the ancestral eukaryote, and are therefore likely to be a universal feature of eukaryotes.
  • Chen, A. (2009). Intonation and reference maintenance in Turkish learners of Dutch: A first insight. AILE - Acquisition et Interaction en Langue Etrangère, 28(2), 67-91.

    Abstract

    This paper investigates L2 learners’ use of intonation in reference maintenance in comparison to native speakers at three longitudinal points. Nominal referring expressions were elicited from two untutored Turkish learners of Dutch and five native speakers of Dutch via a film retelling task, and were analysed in terms of pitch span and word duration. Effects of two types of change in information states were examined, between new and given and between new and accessible. We found native-like use of word duration in both types of change early on but different performances between learners and development over time in one learner in the use of pitch span. Further, the use of morphosyntactic devices had different effects on the two learners. The inter-learner differences and late systematic use of pitch span, in spite of similar use of pitch span in learners’ L1 and L2, suggest that learning may play a role in the acquisition of intonation as a device for reference maintenance.
  • Chen, A. (2009). Perception of paralinguistic intonational meaning in a second language. Language Learning, 59(2), 367-409.
  • Chen, A. (2003). Reaction time as an indicator of discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Cho, T., & McQueen, J. M. (2005). Prosodic influences on consonant production in Dutch: Effects of prosodic boundaries, phrasal accent and lexical stress. Journal of Phonetics, 33(2), 121-157. doi:10.1016/j.wocn.2005.01.001.

    Abstract

    Prosodic influences on phonetic realizations of four Dutch consonants (/t d s z/) were examined. Sentences were constructed containing these consonants in word-initial position; the factors lexical stress, phrasal accent and prosodic boundary were manipulated between sentences. Eleven Dutch speakers read these sentences aloud. The patterns found in acoustic measurements of these utterances (e.g., voice onset time (VOT), consonant duration, voicing during closure, spectral center of gravity, burst energy) indicate that the low-level phonetic implementation of all four consonants is modulated by prosodic structure. Boundary effects on domain-initial segments were observed in stressed and unstressed syllables, extending previous findings which have been based on stressed syllables alone. Three aspects of the data are highlighted. First, shorter VOTs were found for /t/ in prosodically stronger locations (stressed, accented and domain-initial), as opposed to longer VOTs in these positions in English. This suggests that prosodically driven phonetic realization is bounded by language-specific constraints on how phonetic features are specified with phonetic content: Shortened VOT in Dutch reflects enhancement of the phonetic feature {−spread glottis}, while lengthened VOT in English reflects enhancement of {+spread glottis}. Prosodic strengthening therefore appears to operate primarily at the phonetic level, such that prosodically driven enhancement of phonological contrast is determined by phonetic implementation of these (language-specific) phonetic features. Second, an accent effect was observed in stressed and unstressed syllables, and was independent of prosodic boundary size. The domain of accentuation in Dutch is thus larger than the foot. Third, within a prosodic category consisting of those utterances with a boundary tone but no pause, tokens with syntactically defined Phonological Phrase boundaries could be differentiated from the other tokens. This syntactic influence on prosodic phrasing implies the existence of an intermediate-level phrase in the prosodic hierarchy of Dutch.
  • Cho, T. (2005). Prosodic strengthening and featural enhancement: Evidence from acoustic and articulatory realizations of /a,i/ in English. Journal of the Acoustical Society of America, 117(6), 3867-3878. doi:10.1121/1.1861893.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhs 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t d/ as a function of prosodic position, which may be interpreted as being due to prosodically-conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Cholin, J., & Levelt, W. J. M. (2009). Effects of syllable preparation and syllable frequency in speech production: Further evidence for syllabic units at a post-lexical level. Language and Cognitive Processes, 24, 662-684. doi:10.1080/01690960802348852.

    Abstract

    In the current paper, we asked at what level in the speech planning process speakers retrieve stored syllables. There is evidence that syllable structure plays an essential role in the phonological encoding of words (e.g., online syllabification and phonological word formation). There is also evidence that syllables are retrieved as whole units. However, findings that clearly pinpoint these effects to specific levels in speech planning are scarce. We used a naming variant of the implicit priming paradigm to contrast voice onset latencies for frequency-manipulated disyllabic Dutch pseudo-words. While prior implicit priming studies only manipulated the item's form and/or syllable structure overlap, we introduced syllable frequency as an additional factor. If the preparation effect for syllables obtained in the implicit priming paradigm proceeds beyond phonological planning, i.e., includes the retrieval of stored syllables, then the preparation effect should differ for high- and low-frequency syllables. The findings reported here confirm this prediction: Low-frequency syllables benefit significantly more from the preparation than high-frequency syllables. Our findings support the notion of a mental syllabary at a post-lexical level, between the levels of phonological and phonetic encoding.
  • Chu, M., & Kita, S. (2009). Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 591-595). Austin, TX: Cognitive Science Society.

    Abstract

    When we speak, we spontaneously produce gestures (co-speech gestures). Co-speech gestures and speech production are closely interlinked. However, the exact nature of the link is still under debate. To address the question of whether co-speech gestures originate from the speech production system or from a system independent of speech production, the present study examined the relationship between co-speech and co-thought gestures. Co-thought gestures, produced during silent thinking without speaking, presumably originate from a system independent of the speech production processes. We found a positive correlation between the production frequency of co-thought and co-speech gestures, regardless of the communicative function that co-speech gestures might serve. Therefore, we suggest that co-speech gestures and co-thought gestures originate from a common system that is independent of the speech production processes.
  • Collins, L. J., & Chen, X. S. (2009). Ancestral RNA: The RNA biology of the eukaryotic ancestor. RNA Biology, 6(5), 495-502. doi:10.4161/rna.6.5.9551.

    Abstract

    Our knowledge of RNA biology within eukaryotes has exploded over the last five years. Recent research shows that some features once thought to be confined to multicellular life have now been identified in several protist lineages. Hence, it is timely to ask which features of eukaryote RNA biology are ancestral to all eukaryotes. We focus on RNA-based regulation and epigenetic mechanisms that use small regulatory ncRNAs and long ncRNAs, to highlight some of the many questions surrounding eukaryotic ncRNA evolution.
  • Coombs, P. J., Graham, S. A., Drickamer, K., & Taylor, M. E. (2005). Selective binding of the scavenger receptor C-type lectin to Lewisx trisaccharide and related glycan ligands. The Journal of Biological Chemistry, 280, 22993-22999. doi:10.1074/jbc.M504197200.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is an endothelial receptor that is similar in organization to type A scavenger receptors for modified low density lipoproteins but contains a C-type carbohydrate-recognition domain (CRD). Fragments of the receptor consisting of the entire extracellular domain and the CRD have been expressed and characterized. The extracellular domain is a trimer held together by collagen-like and coiled-coil domains adjacent to the CRD. The amino acid sequence of the CRD is very similar to the CRD of the asialoglycoprotein receptor and other galactose-specific receptors, but SRCL binds selectively to asialo-orosomucoid rather than generally to asialoglycoproteins. Screening of a glycan array and further quantitative binding studies indicate that this selectivity results from high affinity binding to glycans bearing the Lewis(x) trisaccharide. Thus, SRCL shares with the dendritic cell receptor DC-SIGN the ability to bind the Lewis(x) epitope. However, it does so in a fundamentally different way, making a primary binding interaction with the galactose moiety of the glycan rather than the fucose residue. SRCL shares with the asialoglycoprotein receptor the ability to mediate endocytosis and degradation of glycoprotein ligands. These studies suggest that SRCL might be involved in selective clearance of specific desialylated glycoproteins from circulation and/or interaction of cells bearing Lewis(x)-type structures with the vascular endothelium.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties [Inferences from eye movements: The influence of the connective 'omdat' ('because') on making causal inferences]. Gramma/TTT, 9, 141-156.
  • Cronin, K. A., Kurian, A. V., & Snowdon, C. T. (2005). Cooperative problem solving in a cooperatively breeding primate. Animal Behaviour, 69, 133-142. doi:10.1016/j.anbehav.2004.02.024.

    Abstract

    We investigated cooperative problem solving in unrelated pairs of the cooperatively breeding cottontop tamarin, Saguinus oedipus, to assess the cognitive basis of cooperative behaviour in this species and to compare their abilities with those of apes and other monkeys. A transparent apparatus was used that required extension of two handles at opposite ends of the apparatus for access to rewards. Resistance was applied to both handles so that two tamarins had to act simultaneously in order to receive rewards. In contrast to several previous studies of cooperation, both tamarins received rewards as a result of simultaneous pulling. The results from two experiments indicated that the cottontop tamarins (1) had a much higher success rate and efficiency of pulling than many of the other species previously studied, (2) adjusted pulling behaviour to the presence or absence of a partner, and (3) spontaneously developed sustained pulling techniques to solve the task. These findings suggest that cottontop tamarins understand the role of the partner in this cooperative task, a cognitive ability widely ascribed only to great apes. The cooperative social system of tamarins, the intuitive design of the apparatus, and the provision of rewards to both participants may explain the performance of the tamarins.
  • Cronin, K. A., Schroeder, K. K. E., Rothwell, E. S., Silk, J. B., & Snowdon, C. T. (2009). Cooperatively breeding cottontop tamarins (Saguinus oedipus) do not donate rewards to their long-term mates. Journal of Comparative Psychology, 123(3), 231-241. doi:10.1037/a0015094.

    Abstract

    This study tested the hypothesis that cooperative breeding facilitates the emergence of prosocial behavior by presenting cottontop tamarins (Saguinus oedipus) with the option to provide food rewards to pair-bonded mates. In Experiment 1, tamarins could provide rewards to mates at no additional cost while obtaining rewards for themselves. Contrary to the hypothesis, tamarins did not demonstrate a preference to donate rewards, behaving similarly to chimpanzees in previous studies. In Experiment 2, the authors eliminated rewards for the donor for a stricter test of prosocial behavior, while reducing separation distress and food preoccupation. Again, the authors found no evidence for a donation preference. Furthermore, tamarins were significantly less likely to deliver rewards to mates when the mate displayed interest in the reward. The results of this study contrast with those recently reported for cooperatively breeding common marmosets, and indicate that prosocial preferences in a food donation task do not emerge in all cooperative breeders. In previous studies, cottontop tamarins have cooperated and reciprocated to obtain food rewards; the current findings sharpen understanding of the boundaries of cottontop tamarins’ food-provisioning behavior.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A. (2005). The lexical statistics of word recognition problems caused by L2 phonetic confusion. In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 413-416).
  • Cutler, A., McQueen, J. M., & Norris, D. (2005). The lexical utility of phoneme-category plasticity. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 103-107).
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16-16.
  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation of the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.

    Abstract

    English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. (1992). Listeners’ responses to extraneous signals coincident with English and French speech. In J. Pittam (Ed.), Proceedings of the 4th Australian International Conference on Speech Science and Technology (pp. 666-671). Canberra: Australian Speech Science and Technology Association.

    Abstract

    English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in these tasks.
  • Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.

    Abstract

    Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models incorporating an obligatory influence of lexical information on phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A. (1992). Proceedings with confidence. New Scientist, (1825), 54.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When the semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A., & Robinson, T. (1992). Response time as a metric for comparison of speech recognition by humans and machines. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 189-192). Alberta: University of Alberta.

    Abstract

    The performance of automatic speech recognition systems is usually assessed in terms of error rate. Human speech recognition produces few errors, but relative difficulty of processing can be assessed via response time techniques. We report the construction of a measure analogous to response time in a machine recognition system. This measure may be compared directly with human response times. We conducted a trial comparison of this type at the phoneme level, including both tense and lax vowels and a variety of consonant classes. The results suggested similarities between human and machine processing in the case of consonants, but differences in the case of vowels.
  • Cutler, A., & Butterfield, S. (1992). Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language, 31, 218-236. doi:10.1016/0749-596X(92)90012-M.

    Abstract

    Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381-410.

    Abstract

    Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.
  • Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.

    Abstract

    Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191–243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Dabrowska, E., Rowland, C. F., & Theakston, A. (2009). The acquisition of questions with long-distance dependencies. Cognitive Linguistics, 20(3), 571-597. doi:10.1515/COGL.2009.025.

    Abstract

    A number of researchers have claimed that questions and other constructions with long distance dependencies (LDDs) are acquired relatively early, by age 4 or even earlier, in spite of their complexity. Analysis of LDD questions in the input available to children suggests that they are extremely stereotypical, raising the possibility that children learn lexically specific templates such as WH do you think S-GAP? rather than general rules of the kind postulated in traditional linguistic accounts of this construction. We describe three elicited imitation experiments with children aged from 4;6 to 6;9 and adult controls. Participants were asked to repeat prototypical questions (i.e., questions which match the hypothesised template), unprototypical questions (which depart from it in several respects) and declarative counterparts of both types of interrogative sentences. The children performed significantly better on the prototypical variants of both constructions, even when both variants contained exactly the same lexical material, while adults showed prototypicality effects for LDD questions only. These results suggest that a general declarative complementation construction emerges quite late in development (after age 6), and that even adults rely on lexically specific templates for LDD questions.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects.
