Publications

  • Lüpke, F. (2005). A grammar of Jalonke argument structure. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59381.
  • MacDermot, K. D., Bonora, E., Sykes, N., Coupe, A.-M., Lai, C. S. L., Vernes, S. C., Vargha-Khadem, F., McKenzie, F., Smith, R. L., Monaco, A. P., & Fisher, S. E. (2005). Identification of FOXP2 truncation as a novel cause of developmental speech and language deficits. American Journal of Human Genetics, 76(6), 1074-1080. doi:10.1086/430841.

    Abstract

    FOXP2, the first gene to have been implicated in a developmental communication disorder, offers a unique entry point into neuromolecular mechanisms influencing human speech and language acquisition. In multiple members of the well-studied KE family, a heterozygous missense mutation in FOXP2 causes problems in sequencing muscle movements required for articulating speech (developmental verbal dyspraxia), accompanied by wider deficits in linguistic and grammatical processing. Chromosomal rearrangements involving this locus have also been identified. Analyses of FOXP2 coding sequence in typical forms of specific language impairment (SLI), autism, and dyslexia have not uncovered any etiological variants. However, no previous study has performed mutation screening of children with a primary diagnosis of verbal dyspraxia, the most overt feature of the disorder in affected members of the KE family. Here, we report investigations of the entire coding region of FOXP2, including alternatively spliced exons, in 49 probands affected with verbal dyspraxia. We detected variants that alter FOXP2 protein sequence in three probands. One such variant is a heterozygous nonsense mutation that yields a dramatically truncated protein product and cosegregates with speech and language difficulties in the proband, his affected sibling, and their mother. Our discovery of the first nonsense mutation in FOXP2 now opens the door for detailed investigations of neurodevelopment in people carrying different etiological variants of the gene. This endeavor will be crucial for gaining insight into the role of FOXP2 in human cognition.
  • Magyari, L. (2005). A nyelv miért nem olyan, mint a szem? (Why is language not like the vertebrate eye?). In J. Gervain, K. Kovács, Á. Lukács, & M. Racsmány (Eds.), Az ezer arcú elme (The mind with a thousand faces) (first edition, pp. 452-460). Budapest: Akadémiai Kiadó.
  • Marinis, T., Roberts, L., Felser, C., & Clahsen, H. (2005). Gaps in second language sentence processing. Studies in Second Language Acquisition, 27(1), 53-78. doi:10.1017/S0272263105050035.

    Abstract

    Four groups of second language (L2) learners of English from different language backgrounds (Chinese, Japanese, German, and Greek) and a group of native speaker controls participated in an online reading time experiment with sentences involving long-distance wh-dependencies. Although the native speakers showed evidence of making use of intermediate syntactic gaps during processing, the L2 learners appeared to associate the fronted wh-phrase directly with its lexical subcategorizer, regardless of whether the subjacency constraint was operative in their native language. This finding is argued to support the hypothesis that nonnative comprehenders underuse syntactic information in L2 processing.
  • Massaro, D. W., & Jesse, A. (2005). The magic of reading: Too many influences for quick and easy explanations. In T. Trabasso, J. Sabatini, D. W. Massaro, & R. C. Calfee (Eds.), From orthography to pedagogy: Essays in honor of Richard L. Venezky. (pp. 37-61). Mahwah, NJ: Lawrence Erlbaum Associates.

    Abstract

    Words are fundamental to reading, and yet over a century of research has not resolved the controversies around how words are recognized. We review some old and new research that disproves simple ideas, such as that words are read as wholes or that they are simply mapped directly to spoken language. We also review theory and research relevant to the question of sublexical influences in word recognition. We describe orthography and phonology and how they are related to each other, and report a series of new experiments on how these sources of information are processed. Tasks include lexical decision, perceptual identification, and naming. Dependent measures are reaction time, accuracy of performance, and a new measure, initial phoneme duration, which refers to the duration of the first phoneme when the target word is pronounced. Important factors in resolving the controversies include the realization that reading has multiple determinants, as well as evaluating the type of task, proper controls such as familiarity of the test items, and accuracy of measurement of the response. We also address potential limitations with measures related to the mapping between orthography and phonology, and show that the existence of a sound-to-spelling consistency effect does not require interactive activation, but can be explained and predicted by a feedforward model, the Fuzzy logical model of perception.
  • Matsuo, A. (2005). [Review of the book Children's discourse: Person, space and time across languages by Maya Hickmann]. Linguistics, 43(3), 653-657. doi:10.1515/ling.2005.43.3.653.
  • McQueen, J. M. (2005). Speech perception. In K. Lamberts, & R. Goldstone (Eds.), The Handbook of Cognition (pp. 255-275). London: Sage Publications.
  • McQueen, J. M. (2005). Spoken word recognition and production: Regular but not inseparable bedfellows. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 229-244). Mahwah, NJ: Erlbaum.
  • McQueen, J. M., & Cutler, A. (1997). Cognitive processes in speech perception. In W. J. Hardcastle, & J. D. Laver (Eds.), The handbook of phonetic sciences (pp. 556-585). Oxford: Blackwell.
  • McQueen, J. M., & Sereno, J. (2005). Cleaving automatic processes from strategic biases in phonological priming. Memory & Cognition, 33(7), 1185-1209.

    Abstract

    In a phonological priming experiment using spoken Dutch words, Dutch listeners were taught varying expectancies and relatedness relations about the phonological form of target words, given particular primes. They learned to expect that, after a particular prime, if the target was a word, it would be from a specific phonological category. The expectancy either involved phonological overlap (e.g., honk-vonk, “base-spark”; expected related) or did not (e.g., nest-galm, “nest-boom”; expected unrelated, where the learned expectation after hearing nest was a word rhyming in -alm). Targets were occasionally inconsistent with expectations. In these inconsistent expectancy trials, targets were either unrelated (e.g., honk-mest, “base-manure”; unexpected unrelated), where the listener was expecting a related target, or related (e.g., nest-pest, “nest-plague”; unexpected related), where the listener was expecting an unrelated target. Participant expectations and phonological relatedness were thus manipulated factorially for three types of phonological overlap (rhyme, one onset phoneme, and three onset phonemes) at three interstimulus intervals (ISIs; 50, 500, and 2,000 msec). Lexical decisions to targets revealed evidence of expectancy-based strategies for all three types of overlap (e.g., faster responses to expected than to unexpected targets, irrespective of phonological relatedness) and evidence of automatic phonological processes, but only for the rhyme and three-phoneme onset overlap conditions and, most strongly, at the shortest ISI (e.g., faster responses to related than to unrelated targets, irrespective of expectations). Although phonological priming thus has both automatic and strategic components, it is possible to cleave them apart.
  • McQueen, J. M., & Mitterer, H. (2005). Lexically-driven perceptual adjustments of vowel categories. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 233-236).
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Positive and negative influences of the lexicon on phonemic decision-making. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 778-781). Beijing: China Military Friendship Publish.

    Abstract

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically-consistent phonemes. Lexical knowledge seems to influence phonemic decision-making only when there is evidence for the lexically-consistent phoneme in the speech signal.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Why Merge really is autonomous and parsimonious. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 47-50). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    We briefly describe the Merge model of phonemic decision-making, and, in the light of general arguments about the possible role of feedback in spoken-word recognition, defend Merge's feedforward structure. Merge not only accounts adequately for the data, without invoking feedback connections, but does so in a parsimonious manner.
  • Meira, S., & Terrill, A. (2005). Contrasting contrastive demonstratives in Tiriyó and Lavukaleve. Linguistics, 43(6), 1131-1152. doi:10.1515/ling.2005.43.6.1131.

    Abstract

    This article explores the contrastive function of demonstratives in two languages, Tiriyó (Cariban, northern Brazil) and Lavukaleve (Papuan isolate, Solomon Islands). The contrastive function has to a large extent been neglected in the theoretical literature on demonstrative functions, although preliminary investigations suggest that there are significant differences in demonstrative use in contrastive versus noncontrastive contexts. Tiriyó and Lavukaleve have what seem at first glance to be rather similar three-term demonstrative systems for exophoric deixis, with a proximal term, a distal term, and a middle term. However, under contrastive usage, significant differences between the two systems become apparent. In presenting an analysis of the contrastive use of demonstratives in these two languages, this article aims to show that the contrastive function is an important parameter of variation in demonstrative systems.
  • Meyer, A. S. (1997). Conceptual influences on grammatical planning units. Language and Cognitive Processes, 12, 859-863. doi:10.1080/016909697386745.
  • Meyer, A. S., Levelt, W. J. M., & Wissink, M. T. (1996). Een modulair model van zinsproductie. Logopedie, 9(2), 21-31.

    Abstract

    This contribution discusses a modular model of sentence production. The planning processes that precede the production of a sentence can be divided into two main components: conceptualization (devising the content of the utterance) and formulation (determining its linguistic form). The formulation process in turn consists of two components, namely grammatical and phonological encoding. Each of these components again comprises a number of subcomponents. This article describes the specific task of each component, how it is carried out, and how the components work together. Some important methods of language production research are also discussed.
  • Meyer, A. S., & Levelt, W. J. M. (2000). Merging speech perception and production [Comment on Norris, McQueen and Cutler]. Behavioral and Brain Sciences, 23(3), 339-340. doi:10.1017/S0140525X00373241.

    Abstract

    A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
  • Meyer, A. S., & Wheeldon, L. (Eds.). (2005). Language production across the life span. Hove: Psychology Press.

    Abstract

    Most current theories of lexical access in speech production are designed to capture the behaviour of young adults - typically college students. However, young adults represent a minority of the world's speakers. For theories of speech production, the question arises of how the young adults' speech develops out of the quite different speech observed in children and adolescents, and how the speech of young adults evolves into the speech observed in older persons. Though a model of adult speech production need not include a detailed account of language development, it should be compatible with current knowledge about the development of language across the lifespan. In this sense, theories of young adults' speech production may be constrained by theories and findings concerning the development of language with age. Conversely, any model of language acquisition or language change in older adults should, of course, be compatible with existing theories of the "ideal" speech found in young speakers. For this Special Issue we elicited papers on the development of speech production in childhood, adult speech production, and changes in speech production in older adults. The structure of the Special Issue is roughly chronological, focusing in turn on the language production of children (papers by Behrens; Goffman, Heisler & Chakraborty; Vousden & Maylor), young adults (papers by Roelofs; Schiller, Jansma, Peters & Levelt; Finocchiaro & Caramazza; Hartsuiker & Barkhuysen; Bonin, Malardier, Meot & Fayol) and older adults (papers by Mortensen, Meyer & Humphreys; Spieler & Griffin; Altmann & Kemper). We hope that the work compiled here will encourage researchers in any of these areas to consider the theories and findings in the neighbouring fields.
  • Meyer, A. S. (1996). Lexical access in phrase and sentence production: Results from picture-word interference experiments. Journal of Memory and Language, 35, 477-496. doi:10.1006/jmla.1996.0026.

    Abstract

    Four experiments investigated the span of advance planning for phrases and short sentences. Dutch subjects were presented with pairs of objects, which they named using noun-phrase conjunctions (e.g., the translation equivalent of "the arrow and the bag") or sentences ("the arrow is next to the bag"). Each display was accompanied by an auditory distracter, which was related in form or meaning to the first or second noun of the utterance or unrelated to both. For sentences and phrases, the mean speech onset time was longer when the distracter was semantically related to the first or second noun and shorter when it was phonologically related to the first noun than when it was unrelated. No phonological facilitation was found for the second noun. This suggests that before utterance onset both target lemmas and the first target form were selected.
  • Meyer, A. S., & Van der Meulen, F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
  • Mitterer, H. (2005). Short- and medium-term plasticity for speaker adaptation seem to be independent. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 83-86).
  • Monteiro, M., Rieger, S., Steinmüller, U., & Skiba, R. (1997). Deutsch als Fremdsprache: Fachsprache im Ingenieurstudium. Frankfurt am Main: IKO - Verlag für Interkulturelle Kommunikation.
  • Morgan, J., & Meyer, A. S. (2005). Processing of extrafoveal objects during multiple-object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 428-442. doi:10.1037/0278-7393.31.3.428.

    Abstract

    In 3 experiments, the authors investigated the extent to which objects that are about to be named are processed prior to fixation. Participants named pairs or triplets of objects. One of the objects, initially seen extrafoveally (the interloper), was replaced by a different object (the target) during the saccade toward it. The interloper-target pairs were identical or unrelated objects or visually and conceptually unrelated objects with homophonous names (e.g., animal-baseball bat). The mean latencies and gaze durations for the targets were shorter in the identity and homophone conditions than in the unrelated condition. This was true when participants viewed a fixation mark until the interloper appeared and when they fixated on another object and prepared to name it while viewing the interloper. These results imply that objects that are about to be named may undergo far-reaching processing, including access to their names, prior to fixation.
  • Moscoso del Prado Martín, F., Deutsch, A., Frost, R., Schreuder, R., De Jong, N. H., & Baayen, R. H. (2005). Changing places: A cross-language perspective on frequency and family size in Dutch and Hebrew. Journal of Memory and Language, 53(4), 496-512. doi:10.1016/j.jml.2005.07.003.

    Abstract

    This study uses the morphological family size effect as a tool for exploring the degree of isomorphism in the networks of morphologically related words in the Hebrew and Dutch mental lexicon. Hebrew and Dutch are genetically unrelated, and they structure their morphologically complex words in very different ways. Two visual lexical decision experiments document substantial cross-language predictivity for the family size measure after partialing out the effect of word frequency and word length. Our data show that the morphological family size effect is not restricted to Indo-European languages but extends to languages with non-concatenative morphology. In Hebrew, a new inhibitory component of the family size effect emerged that arises when a Hebrew root participates in different semantic fields.
  • Naffah, N., Kempen, G., Rohmer, J., Steels, L., Tsichritzis, D., & White, G. (1985). Intelligent Workstation in the office: State of the art and future perspectives. In J. Roukens, & J. Renuart (Eds.), Esprit '84: Status report of ongoing work (pp. 365-378). Amsterdam: Elsevier Science Publishers.
  • Narasimhan, B. (2005). Splitting the notion of 'agent': Case-marking in early child Hindi. Journal of Child Language, 32(4), 787-803. doi:10.1017/S0305000905007117.

    Abstract

    Two construals of agency are evaluated as possible innate biases guiding case-marking in children. A BROAD construal treats agentive arguments of multi-participant and single-participant events as being similar. A NARROWER construal is restricted to agents of multi-participant events. In Hindi, ergative case-marking is associated with agentive participants of multi-participant, perfective actions. Children relying on a broad or narrow construal of agent are predicted to overextend ergative case-marking to agentive participants of transitive imperfective actions and/or intransitive actions. Longitudinal data from three children acquiring Hindi (1;7 to 3;9) reveal no overextension errors, suggesting early sensitivity to distributional patterns in the input.
  • Narasimhan, B., Budwig, N., & Murty, L. (2005). Argument realization in Hindi caregiver-child discourse. Journal of Pragmatics, 37(4), 461-495. doi:10.1016/j.pragma.2004.01.005.

    Abstract

    An influential claim in the child language literature posits that children use structural cues in the input language to acquire verb meaning (Gleitman, 1990). One such cue is the number of arguments co-occurring with the verb, which provides an indication as to the event type associated with the verb (Fisher, 1995). In some languages however (e.g. Hindi), verb arguments are ellipted relatively freely, subject to certain discourse-pragmatic constraints. In this paper, we address three questions: Is the pervasive argument ellipsis characteristic of adult Hindi also found in Hindi-speaking caregivers' input? If so, do children consequently make errors in verb transitivity? How early do children learning a split-ergative language, such as Hindi, exhibit sensitivity to discourse-pragmatic influences on argument realization? We show that there is massive argument ellipsis in caregivers' input to 3-4 year-olds. However, children acquiring Hindi do not make transitivity errors in their own speech. Nor do they elide arguments randomly. Rather, even at this early age, children appear to be sensitive to discourse-pragmatics in their own spontaneous speech production. These findings in a split-ergative language parallel patterns of argument realization found in children acquiring both nominative-accusative languages (e.g. Korean) and ergative-absolutive languages (e.g. Tzeltal, Inuktitut).
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2005). Testing the limits of the semantic illusion phenomenon: ERPs reveal temporary semantic change deafness in discourse comprehension. Cognitive Brain Research, 24(3), 691-701. doi:10.1016/j.cogbrainres.2005.04.003.

    Abstract

    In general, language comprehension is surprisingly reliable. Listeners very rapidly extract meaning from the unfolding speech signal, on a word-by-word basis, and usually successfully. Research on 'semantic illusions' however suggests that under certain conditions, people fail to notice that the linguistic input simply doesn't make sense. In the current event-related brain potentials (ERP) study, we examined whether listeners would, under such conditions, spontaneously detect an anomaly in which a human character central to the story at hand (e.g., "a tourist") was suddenly replaced by an inanimate object (e.g., "a suitcase"). Because this replacement introduced a very powerful coherence break, we expected listeners to immediately notice the anomaly and generate the standard ERP effect associated with incoherent language, the N400 effect. However, instead of the standard N400 effect, anomalous words elicited a positive ERP effect from about 500-600 ms onwards. The absence of an N400 effect suggests that subjects did not immediately notice the anomaly, and that for a few hundred milliseconds the comprehension system had converged on an apparently coherent but factually incorrect interpretation. The presence of the later ERP effect indicates that subjects were processing for comprehension and did ultimately detect the anomaly. Therefore, we take the absence of a regular N400 effect as the online manifestation of a temporary semantic illusion. Our results also show that even attentive listeners sometimes fail to notice a radical change in the nature of a story character, and therefore suggest a case of short-lived 'semantic change deafness' in language comprehension.
  • Noordman, L. G., & Vonk, W. (1997). The different functions of a conjunction in constructing a representation of the discourse. In J. Costermans, & M. Fayol (Eds.), Processing interclausal relationships: studies in the production and comprehension of text (pp. 75-94). Mahwah, NJ: Lawrence Erlbaum.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Feedback on feedback on feedback: It’s feedforward. (Response to commentators). Behavioral and Brain Sciences, 23, 352-370.

    Abstract

    The central thesis of the target article was that feedback is never necessary in spoken word recognition. The commentaries present no new data and no new theoretical arguments which lead us to revise this position. In this response we begin by clarifying some terminological issues which have led to a number of significant misunderstandings. We provide some new arguments to support our case that the feedforward model Merge is indeed more parsimonious than the interactive alternatives, and that it provides a more convincing account of the data than alternative models. Finally, we extend the arguments to deal with new issues raised by the commentators, such as infant speech perception and neural architecture.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23, 299-325.

    Abstract

    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
  • Norris, D., Cutler, A., McQueen, J. M., Butterfield, S., & Kearns, R. K. (2000). Language-universal constraints on the segmentation of English. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 43-46). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel, or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC appears to be language-universal rather than language-specific.
  • Norris, D., & Cutler, A. (1985). Juncture detection. Linguistics, 23, 689-705.
  • Norris, D., Cutler, A., & McQueen, J. M. (2000). The optimal architecture for simulating spoken-word recognition. In C. Davis, T. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society. Adelaide: Causal Productions.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of subcategorical mismatch in word forms. The source of TRACE's failure lay not in interactive connectivity, not in the presence of inter-word competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model, which has inter-word competition, phonemic representations and continuous optimisation (but no interactive connectivity).
  • Norris, D., McQueen, J. M., Cutler, A., & Butterfield, S. (1997). The possible-word constraint in the segmentation of continuous speech. Cognitive Psychology, 34, 191-243. doi:10.1006/cogp.1997.0671.

    Abstract

    We propose that word recognition in continuous speech is subject to constraints on what may constitute a viable word of the language. This Possible-Word Constraint (PWC) reduces activation of candidate words if their recognition would imply word status for adjacent input which could not be a word - for instance, a single consonant. In two word-spotting experiments, listeners found it much harder to detect apple, for example, in fapple (where [f] alone would be an impossible word), than in vuffapple (where vuff could be a word of English). We demonstrate that the PWC can readily be implemented in a competition-based model of continuous speech recognition, as a constraint on the process of competition between candidate words; where a stretch of speech between a candidate word and a (known or likely) word boundary is not a possible word, activation of the candidate word is reduced. This implementation accurately simulates both the present results and data from a range of earlier studies of speech segmentation.
  • O'Shannessy, C. (2005). Light Warlpiri: A new language. Australian Journal of Linguistics, 25(1), 31-57. doi:10.1080/07268600500110472.
  • Otake, T., & Cutler, A. (2000). A set of Japanese word cohorts rated for relative familiarity. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 766-769). Beijing: China Military Friendship Publish.

    Abstract

    A database is presented of relative familiarity ratings for 24 sets of Japanese words, each set comprising words overlapping in the initial portions. These ratings are useful for the generation of material sets for research in the recognition of spoken words.
  • Otake, T., & Cutler, A. (Eds.). (1996). Phonological structure and language processing: Cross-linguistic studies. Berlin: Mouton de Gruyter.
  • Otake, T., Yoneyama, K., Cutler, A., & van der Lugt, A. (1996). The representation of Japanese moraic nasals. Journal of the Acoustical Society of America, 100, 3831-3842. doi:10.1121/1.417239.

    Abstract

    Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.
  • Ozyurek, A. (2000). Differences in spatial conceptualization in Turkish and English discourse: Evidence from both speech and gesture. In A. Goksel, & C. Kerslake (Eds.), Studies on Turkish and Turkic languages (pp. 263-272). Wiesbaden: Harrassowitz.
  • Ozyurek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5(1/2), 219-240.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language-specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Ozyurek, A. (1996). How children talk about a conversation. Journal of Child Language, 23(3), 693-714. doi:10.1017/S0305000900009004.

    Abstract

    This study investigates how children of different ages talk about a conversation that they have witnessed. Forty-eight Turkish children, aged five, nine and thirteen years, saw a televised dialogue between two Sesame Street characters (Bert and Ernie). Afterward, they narrated what they had seen and heard. Their reports were analysed for the development of linguistic devices used to orient their listeners to the relevant properties of a conversational exchange. Each utterance in the child's narrative was analysed as to its conversational role: (1) whether the child used direct or indirect quotation frames; (2) whether the child marked the boundaries of conversational turns using speakers' names and (3) whether the child used a marker for pairing of utterances made by different speakers (agreement-disagreement, request-refusal, questioning-answering). Within pairings, children's use of (a) the temporal and evaluative connectivity markers and (b) the kind of verb of saying were identified. The data indicate that there is a developmental change in children's ability to use appropriate linguistic means to orient their listeners to the different properties of a conversation. The development and use of these linguistic means enable the child to establish different social roles in a narrative interaction. The findings are interpreted in terms of the child's social-communicative development from being a 'character' to becoming a 'narrator' and 'author' of the reported conversation in the narrative situation.
  • Ozyurek, A., & Ozcaliskan, S. (2000). How do children learn to conflate manner and path in their speech and gestures? Differences in English and Turkish. In E. V. Clark (Ed.), The proceedings of the Thirtieth Child Language Research Forum (pp. 77-85). Stanford: CSLI Publications.
  • Ozyurek, A., & Trabasso, T. (1997). Evaluation during the understanding of narratives. Discourse Processes, 23(3), 305-337. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=hlh&AN=12673020&site=ehost-live.

    Abstract

    Evaluation plays a role in the telling and understanding of narratives, in communicative interaction, emotional understanding, and in psychological well-being. This article reports a study of evaluation by describing how readers monitor the concerns of characters over the course of a narrative. The main hypothesis is that readers track the well-being of a character via the expression of the character's internal states. Reader evaluations were revealed in think-aloud protocols obtained during reading of narrative texts, one sentence at a time. Five kinds of evaluative inferences were found: appraisals (good versus bad), preferences (like versus don't like), emotions (happy versus frustrated), goals (want versus don't want), or purposes (to attain or maintain X versus to prevent or avoid X). Readers evaluated all sentences. The mean rate of evaluation per sentence was 0.55. Positive and negative evaluations over the course of the story indicated that things initially went badly for characters, improved with the formulation and execution of goal plans, declined with goal failure, and improved as characters formulated new goals and succeeded. The kind of evaluation made depended upon the episodic category of the event and the event's temporal location in the story. Evaluations also served to explain or predict events. In making evaluations, readers stayed within the frame of the story and perspectives of the character or narrator. They also moved out of the narrative frame and addressed evaluations towards the experimenter in a communicative context.
  • Ozyurek, A. (2000). The influence of addressee location on spatial language and representational gestures of direction. In D. McNeill (Ed.), Language and gesture (pp. 64-83). Cambridge: Cambridge University Press.
  • Pallier, C., Cutler, A., & Sebastian-Galles, N. (1997). Prosodic structure and phonetic processing: A cross-linguistic study. In Proceedings of EUROSPEECH 97 (pp. 2131-2134). Grenoble, France: ESCA.

    Abstract

    Dutch and Spanish differ in how predictable the stress pattern is as a function of the segmental content: it is correlated with syllable weight in Dutch but not in Spanish. In the present study, two experiments were run to compare the abilities of Dutch and Spanish speakers to separately process segmental and stress information. It was predicted that the Spanish speakers would have more difficulty focusing on the segments and ignoring the stress pattern than the Dutch speakers. The task was a speeded classification task on CVCV syllables, with blocks of trials in which the stress pattern could vary versus blocks in which it was fixed. First, we found interference due to stress variability in both languages, suggesting that the processing of segmental information cannot be performed independently of stress. Second, the effect was larger for Spanish than for Dutch, suggesting that the degree of interference from stress variation may be partially mitigated by the predictability of stress placement in the language.
  • Patterson, R. D., & Cutler, A. (1989). Auditory preprocessing and recognition of speech. In A. Baddeley, & N. Bernsen (Eds.), Research directions in cognitive science: A European perspective: Vol. 1. Cognitive psychology (pp. 23-60). London: Erlbaum.
  • Pederson, E., & Wilkins, D. (1996). A cross-linguistic questionnaire on 'demonstratives'. In S. C. Levinson (Ed.), Manual for the 1996 Field Season (pp. 1-11). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003259.

    Abstract

    Demonstrative terms (e.g., this and that) are key items in understanding how a language constructs and interprets spatial relationships. This in-depth questionnaire explores how demonstratives (and similar spatial deixis forms) function in the research language, covering such topics as their morphology and syntax, semantic dimensions, and co-occurring gesture practices. Questionnaire responses should ideally be based on natural, situated discourse as well as elicitation with consultants.
  • Pederson, E., & Senft, G. (1996). Route descriptions: interactive games with Eric's maze task. In S. C. Levinson (Ed.), Manual for the 1996 Field Season (pp. 15-17). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003287.

    Abstract

    What are the preferred ways to describe spatial relationships in different linguistic and cultural groups, and how does this interact with non-linguistic spatial awareness? This game was devised as an interactive supplement to several items that collect information on the encoding and understanding of spatial relationships, especially as relevant to “route descriptions”. This is a director-matcher task, where one consultant has access to stimulus materials that show a “target” situation, and directs another consultant (who cannot see the target) to recreate this arrangement.
  • Penke, M., Janssen, U., Indefrey, P., & Seitz, R. (2005). No evidence for a rule/procedural deficit in German patients with Parkinson's disease. Brain and Language, 95(1), 139-140. doi:10.1016/j.bandl.2005.07.078.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1997). A dynamic role of the medial temporal lobe during retrieval of declarative memory in man. NeuroImage, 6, 1-11.

    Abstract

    Understanding the role of the medial temporal lobe (MTL) in learning and memory is an important problem in cognitive neuroscience. Memory and learning processes that depend on the function of the MTL and related diencephalic structures (e.g., the anterior and mediodorsal thalamic nuclei) are defined as declarative. We have studied the MTL activity as indicated by regional cerebral blood flow with positron emission tomography and statistical parametric mapping during recall of abstract designs in a less practiced memory state as well as in a well-practiced (well-encoded) memory state. The results showed an increased activity of the MTL bilaterally (including parahippocampal gyrus extending into hippocampus proper, as well as anterior lingual and anterior fusiform gyri) during retrieval in the less practiced memory state compared to the well-practiced memory state, indicating a dynamic role of the MTL in retrieval during the learning processes. The results also showed that the activation of the MTL decreases as the subjects learn to draw abstract designs from memory, indicating a changing role of the MTL during recall in the earlier stages of acquisition compared to the well-encoded declarative memory state.
  • Petersson, K. M., Grenholm, P., & Forkstam, C. (2005). Artificial grammar learning and neural networks. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 1726-1731).

    Abstract

    Recent fMRI studies indicate that language-related brain regions are engaged in artificial grammar (AG) processing. In the present study we investigate the Reber grammar by means of formal analysis and network simulations. We outline a new method for describing the network dynamics and propose an approach to grammar extraction based on the state-space dynamics of the network. We conclude that statistical frequency-based and rule-based acquisition procedures can be viewed as complementary perspectives on grammar learning, and more generally, that classical cognitive models can be viewed as a special case of a dynamical systems perspective on information processing.
  • Petersson, K. M., Reis, A., Askelöf, S., Castro-Caldas, A., & Ingvar, M. (2000). Language processing modulated by literacy: A network analysis of verbal repetition in literate and illiterate subjects. Journal of Cognitive Neuroscience, 12(3), 364-382. doi:10.1162/089892900562147.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65-66, 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well-known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite-precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
  • Petrovic, P., Petersson, K. M., Ghatan, P., Stone-Elander, S., & Ingvar, M. (2000). Pain related cerebral activation is altered by a distracting cognitive task. Pain, 85, 19-30.

    Abstract

    It has previously been suggested that the activity in sensory regions of the brain can be modulated by attentional mechanisms during parallel cognitive processing. To investigate whether such attention-related modulations are present in the processing of pain, regional cerebral blood flow was measured using [15O]butanol and positron emission tomography in conditions involving both pain and parallel cognitive demands. The painful stimulus consisted of the standard cold pressor test and the cognitive task was a computerised perceptual maze test. The activations during the maze test reproduced findings in previous studies of the same cognitive task. The cold pressor test evoked significant activity in the contralateral S1, and bilaterally in the somatosensory association areas (including S2), the ACC and the mid-insula. The activity in the somatosensory association areas and periaqueductal gray/midbrain was significantly modified, i.e. relatively decreased, when the subjects were also performing the maze task. The altered activity was accompanied by significantly lower ratings of pain during the cognitive task. In contrast, lateral orbitofrontal regions showed a relative increase of activity during pain combined with the maze task as compared to pain alone, which suggests the possibility of the involvement of frontal cortex in the modulation of regions processing pain.
  • Pijls, F., Daelemans, W., & Kempen, G. (1987). Artificial intelligence tools for grammar and spelling instruction. Instructional Science, 16(4), 319-336. doi:10.1007/BF00117750.

    Abstract

    In The Netherlands, grammar teaching is an especially important subject in the curriculum of children aged 10-15 for several reasons. However, in spite of all attention and time invested, the results are poor. This article describes the problems and our attempt to overcome them by developing an intelligent computational instructional environment consisting of: a linguistic expert system, containing a module representing grammar and spelling rules and a number of modules to manipulate these rules; a didactic module; and a student interface with special facilities for grammar and spelling. Three prototypes of the functionality are discussed: BOUWSTEEN and COGO, which are programs for constructing and analyzing Dutch sentences; and TDTDT, a program for the conjugation of Dutch verbs.
  • Pijls, F., & Kempen, G. (1987). Kennistechnologische leermiddelen in het grammatica- en spellingonderwijs. Nederlands Tijdschrift voor de Psychologie, 42, 354-363.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1996). Observational and checklist measures of vocabulary composition: What do they mean? Journal of Child Language, 23(3), 573-590. doi:10.1017/S0305000900008953.

    Abstract

    Observational and checklist measures of vocabulary composition have both recently been used to look at the absolute proportion of nouns in children's early vocabularies. However, they have tended to generate rather different results. The present study is an attempt to investigate the relationship between such measures in a sample of 26 children between 1;1 and 2;1 at approximately 50 and 100 words. The results show that although observational and checklist measures are significantly correlated, there are also systematic quantitative differences between them which seem to reflect a combination of checklist, maternal-report and observational sampling biases. This suggests that, although both kinds of measure may represent good indices of differences in vocabulary size and composition across children and hence be useful as dependent variables in correlational research, neither may be ideal for estimating the absolute proportion of nouns in children's vocabularies. The implication is that questions which rely on information about the absolute proportion of particular kinds of words in children's vocabularies can only be properly addressed by detailed longitudinal studies in which an attempt is made to collect more comprehensive vocabulary records for individual children.
  • Pine, J. M., Rowland, C. F., Lieven, E. V., & Theakston, A. L. (2005). Testing the Agreement/Tense Omission Model: Why the data on children's use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language, 32(2), 269-289. doi:10.1017/S0305000905006860.

    Abstract

    One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schütze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present study tests this prediction on data from 12 children between the ages of 1;8.22 and 3;0.10. This is done, first, by identifying children who produced a reasonably large number of non-nominative 3psg subjects; second, by estimating the expected rate of agreeing verbs with masculine and feminine non-nominative subjects in these children's speech; and, third, by examining the actual rate at which agreeing verb forms occurred with non-nominative subjects in those areas of the data in which the expected error rate was significantly greater than 10%. The results show, first, that only three of the children produced enough non-nominative subjects to allow a reasonable test of the ATOM to be made; second, that for all three of these children, the only area of the data in which the expected frequency of agreeing verbs with non-nominative subjects was significantly greater than 10% was their use of feminine case-marked subjects; and third, that for all three of these children, the rate of agreeing verbs with non-nominative feminine subjects was over 30%. These results raise serious doubts about the claim that children's use of non-nominative subjects can be explained in terms of AGR optionality, and suggest the need for a model of pronoun case-marking error that can explain why some children produce agreeing verb forms with non-nominative subjects as often as they do.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1997). Stylistic variation at the “single-word” stage: Relations between maternal speech characteristics and children's vocabulary composition and usage. Child Development, 68(5), 807-819. doi:10.1111/j.1467-8624.1997.tb01963.x.

    Abstract

    In this study we test a number of different claims about the nature of stylistic variation at the “single-word” stage by examining the relation between variation in early vocabulary composition, variation in early language use, and variation in the structural and functional properties of mothers' child-directed speech. Maternal-report and observational data were collected for 26 children at 10, 50, and 100 words. These were then correlated with a variety of different measures of maternal speech at 10 words. The results show substantial variation in the percentage of common nouns and unanalyzed phrases in children's vocabularies, and significant relations between this variation and the way in which language is used by the child. They also reveal significant relations between the way in which mothers use language at 10 words and the way in which their children use language at 50 words, and between certain formal properties of mothers' speech at 10 words and the percentage of common nouns and unanalyzed phrases in children's early vocabularies. However, most of these relations disappear when an attempt is made to control for possible effects of the child on the mother at Time 1. The exception is a significant negative correlation between mothers' tendency to produce speech that illustrates word boundaries and the percentage of unanalyzed phrases at 50 and 100 words. This suggests that mothers whose speech provides the child with information about where new words begin and end tend to have children with few unanalyzed phrases in their early vocabularies.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62(2-4), 146-159. doi:10.1159/000090095.

    Abstract

    This study investigates the relationship between word repetition, predictability from neighbouring words, and articulatory reduction in Dutch. For the seven most frequent words ending in the adjectival suffix -lijk, 40 occurrences were randomly selected from a large database of face-to-face conversations. Analysis of the selected tokens showed that the degree of articulatory reduction (as measured by duration and number of realized segments) was affected by repetition, predictability from the previous word and predictability from the following word. Interestingly, not all of these effects were significant across morphemes and target words. Repetition effects were limited to suffixes, while effects of predictability from the previous word were restricted to the stems of two of the seven target words. Predictability from the following word affected the stems of all target words equally, but not all suffixes. The implications of these findings for models of speech production are discussed.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118(4), 2561-2569. doi:10.1121/1.2011150.

    Abstract

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
  • Poletiek, F. H. (2000). De beoordelaar dobbelt niet - denkt hij. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 55(5), 246-249.
  • Poletiek, F. H. (1997). De wet 'bijzondere opnemingen in psychiatrische ziekenhuizen' aan de cijfers getoetst. Maandblad voor Geestelijke Volksgezondheid, 4, 349-361.
  • Poletiek, F. H., & Berndsen, M. (2000). Hypothesis testing as risk behaviour with regard to beliefs. Journal of Behavioral Decision Making, 13(1), 107-123. doi:10.1002/(SICI)1099-0771(200001/03)13:1<107:AID-BDM349>3.0.CO;2-P.

    Abstract

    In this paper hypothesis‐testing behaviour is compared to risk‐taking behaviour. It is proposed that choosing a suitable test for a given hypothesis requires making a preposterior analysis of two aspects of such a test: the probability of obtaining supporting evidence and the evidential value of this evidence. This consideration resembles the one a gambler makes when choosing among bets, each having a probability of winning and an amount to be won. A confirmatory testing strategy can be defined within this framework as a strategy directed at maximizing either the probability or the value of a confirming outcome. Previous theories on testing behaviour have focused on the human tendency to maximize the probability of a confirming outcome. In this paper, two experiments are presented in which participants tend to maximize the confirming value of the test outcome. Motivational factors enhance this tendency dependent on the context of the testing situation. Both this result and the framework are discussed in relation to other studies in the field of testing behaviour.
  • Poletiek, F. H. (in preparation). Inside the juror: The psychology of juror decision-making [Bespreking van De geest van de jury (1997)].
  • Poletiek, F. H., & Rassin, E. (Eds.). (2005). Het (on)bewuste [The (un)conscious] [Special Issue]. De Psycholoog.
  • Poletiek, F. H., & Van den Bos, E. J. (2005). Het onbewuste is een dader met een motief. De Psycholoog, 40(1), 11-17.
  • Poletiek, F. H. (1996). Paradoxes of falsification. Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 49(2), 447-462. doi:10.1080/713755628.
  • Poletiek, F. H. (2005). The proof of the pudding is in the eating: Translating Popper's philosophy into a model for testing behaviour. In K. I. Manktelow, & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 333-347). Hove: Psychology Press.
  • Praamstra, P., Meyer, A. S., Cools, A. R., Horstink, M. W. I. M., & Stegeman, D. F. (1996). Movement preparation in Parkinson's disease: Time course and distribution of movement-related potentials in a movement precueing task. Brain, 119, 1689-1704. doi:10.1093/brain/119.5.1689.

    Abstract

    Investigations of the effects of advance information on movement preparation in Parkinson's disease using reaction time (RT) measures have yielded contradictory results. In order to obtain direct information regarding the time course of movement preparation, we combined RT measurements in a movement precueing task with multi-channel recordings of movement-related potentials in the present study. Movements of the index and middle fingers of the left and right hand were either precued or not by advance information regarding the side (left or right hand) of the required response. Reaction times were slower for patients than for control subjects. Both groups benefited equally from informative precues, indicating that patients utilized the advance information as effectively as control subjects. Lateralization of the movement-preceding cerebral activity [i.e. the lateralized readiness potential (LRP)] confirmed that patients used the available partial information to prepare their responses and started this process no later than controls. In conjunction with EMG onset times, the LRP onset measures allowed for a fractionation of the RTs, which provided clues to the stages where the slowness of Parkinson's disease patients might arise. No definite abnormalities of temporal parameters were found, but differences in the distribution of the lateralized movement-preceding activity between patients and controls suggested differences in the cortical organization of movement preparation. Differences in amplitude of the contingent negative variation (CNV) and differences in the way in which the CNV was modulated by the information given by the precue pointed in the same direction. A difference in amplitude of the P300 between patients and controls suggested that preprogramming a response required more effort from patients than from control subjects.
  • Radeau, M., & Van Berkum, J. J. A. (1996). Gender decision. Language and Cognitive Processes, 11(6), 605-610. doi:10.1080/016909696387006.

    Abstract

    In languages in which nouns have a grammatical gender, word recognition can be estimated by gender decision response times. Although gender decision has yet to be used extensively, it has proved sensitive to several factors that have been shown to affect lexical access. The task is not restricted to spoken language but can be used with linguistic information from other sensory modalities.
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed slower than are words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that graphemic complexity and multiple print-to-sound associations effects are independent and should be accounted for in different ways by models of written word processing.
  • Roelofs, A. (2005). Spoken word planning, comprehending, and self-monitoring: Evaluation of WEAVER++. In R. Hartsuiker, R. Bastiaanse, A. Postma, & F. Wijnen (Eds.), Phonological encoding and monitoring in normal and pathological speech (pp. 42-63). Hove: Psychology press.
  • Roelofs, A. (2005). The visual-auditory color-word Stroop asymmetry and its time course. Memory & Cognition, 33(8), 1325-1336.

    Abstract

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.
  • Roelofs, A. (2005). From Popper to Lakatos: A case for cumulative computational modeling. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 313-330). Mahwah,NJ: Erlbaum.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1996). Interaction between semantic and orthographic factors in conceptually driven naming: Comment on Starreveld and La Heij (1995). Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 246-251.

    Abstract

    P. A. Starreveld and W. La Heij (1995) tested the seriality view of lexical access in speech production, according to which lexical selection and the encoding of a word's form proceed in serial order without feedback. In 2 experiments, they looked at the combined effect of semantic and orthographic relatedness of written distracter words in tasks that required conceptually driven naming. They found an interaction between semantic relatedness and orthographic relatedness and argued that the observed interaction refutes the seriality view of lexical access. In this comment, the authors argue that Starreveld and La Heij's rejection of serial access was based on an oversimplified conception of the seriality view and that interaction, rather than additivity, is predicted by existing conceptions of serial access.
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In Roelofs (1992a, Cognition, 42, 107-142; 1993, Cognition, 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification: syllables are constructed on the fly rather than stored with lexical items, (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
  • Rösler, D., & Skiba, R. (1987). Eine Datenbank für den Sprachunterricht: Ein Lehrmaterial-Steinbruch für Deutsch als Zweitsprache. Mainz: Werkmeister.
  • Rowland, C. F., & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: what children do know? Journal of Child Language, 27(1), 157-181.

    Abstract

    The present paper reports an analysis of correct wh-question production and subject–auxiliary inversion errors in one child's early wh-question data (age 2; 3.4 to 4; 10.23). It is argued that two current movement rule accounts (DeVilliers, 1991; Valian, Lasser & Mandelbaum, 1992) cannot explain the patterning of early wh-questions. However, the data can be explained in terms of the child's knowledge of particular lexically-specific wh-word+auxiliary combinations, and the pattern of inversion and uninversion predicted from the relative frequencies of these combinations in the mother's speech. The results support the claim that correctly inverted wh-questions can be produced without access to a subject–auxiliary inversion rule and are consistent with the constructivist claim that a distributional learning mechanism that learns and reproduces lexically-specific formulae heard in the input can explain much of the early multi-word speech data. The implications of these results for movement rule-based and constructivist theories of grammatical development are discussed.
  • Rowland, C. F. (2000). The grammatical acquisition of wh-questions in early English multi-word speech. PhD Thesis, University of Nottingham, UK.

    Abstract

    Recent studies of wh-question acquisition have tended to come from the nativist side of the language acquisition debate with little input from a constructivist perspective. The present work was designed to redress the balance, first by presenting a detailed description of young children's wh-question acquisition data, second, by providing detailed critiques of two nativist theories of wh-question acquisition, and third, by presenting a preliminary account of young children's wh-question development from a constructivist perspective. Analyses of the data from twelve 2- to 3-year-old children collected over a year and of data from an older child (Adam from the Brown corpus, 1973) are described and three conclusions are drawn. First, it is argued that the data suggest that children's knowledge of how to form wh-questions builds up gradually as they learn how to combine lexical items such as wh-words and auxiliaries in specific ways. Second, it is concluded that two nativist theories of grammatical development (Radford, 1990, 1992, 1995, 1996; Valian, Lasser & Mandelbaum, 1992) fail to account successfully for the wh-question data produced by the children. Third, it is asserted that the lexically-specific nature of children's early wh-questions is compatible with a lexical constructivist view of development, which proposes that the language learning mechanism learns by picking up high frequency lexical patterns from the input. The implications of these conclusions for theories of language development and future research are discussed.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2- to 3-year-old English-speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • De Ruiter, J. P., & Wilkins, D. (Eds.). (1996). Max Planck Institute for Psycholinguistics: Annual report 1996. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Salverda, A. P. (2005). Prosodically-conditioned detail in the recognition of spoken words. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57311.

    Abstract

    The research presented in this dissertation examined the influence of prosodically-conditioned detail on the recognition of spoken words. The main finding is that subphonemic information in the speech signal that is conditioned by constituent-level prosodic structure can affect lexical processing systematically. It was shown that such information, as indicated by and estimated from the lengthening of speech sounds in the vicinity of prosodic boundaries, can help listeners to distinguish onset-embedded words (e.g. 'ham') from longer words that have this word embedded at their onset (e.g. 'hamster'). Furthermore, it was shown that variation in the realization of a spoken word that is associated with its position in the prosodic structure of an utterance can affect lexical processing. The pattern of competitor activation associated with the recognition of a monosyllabic spoken word in utterance-final position, where the realization of the word is strongly affected by the utterance boundary, is different from that associated with the recognition of the same word in utterance-medial position, where the realization of the word is less strongly affected by the following prosodic-word boundary. Taken together, the findings attest to the extraordinary sensitivity of the spoken-word recognition system by demonstrating the relevance for lexical processing of very fine-grained phonetic detail conditioned by prosodic structure.

    Additional information

    full text via Radboud Repository
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. Neurocomputing, 32-33, 987-994. doi:10.1016/S0925-2312(00)00270-8.

    Abstract

    Capacity limited memory systems need to gradually forget old information in order to avoid catastrophic forgetting where all stored information is lost. This can be achieved by allowing new information to overwrite old, as in the so-called palimpsest memory. This paper describes a new such learning rule employed in an attractor neural network. The network does not exhibit catastrophic forgetting, has a capacity dependent on the learning time constant and exhibits recency effects in retrieval.
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. In J. M. Bower (Ed.), Computational Neuroscience: Trends in Research 2000 (pp. 987-994). Amsterdam: Elsevier.
  • Sauter, D., Wiland, J., Warren, J., Eisner, F., Calder, A., & Scott, S. K. (2005). Sounds of joy: An investigation of vocal expressions of positive emotions [Abstract]. Journal of Cognitive Neuroscience, 61(Supplement), B99.

    Abstract

    A series of experiments tested Ekman’s (1992) hypothesis that there is a set of positive basic emotions that are expressed using vocal para-linguistic sounds, e.g. laughter and cheers. The proposed categories investigated were amusement, contentment, pleasure, relief and triumph. Behavioural testing using a forced-choice task indicated that participants were able to reliably recognize vocal expressions of the proposed emotions. A cross-cultural study in the preliterate Himba culture in Namibia confirmed that these categories are also recognized across cultures. A recognition test of acoustically manipulated emotional vocalizations established that the recognition of different emotions utilizes different vocal cues, and that these in turn differ from the cues used when comprehending speech. In a study using fMRI we found that relative to a signal correlated noise baseline, the paralinguistic expressions of emotion activated bilateral superior temporal gyri and sulci, lateral and anterior to primary auditory cortex, which is consistent with the processing of non-linguistic vocal cues in the auditory ‘what’ pathway. Notably, amusement was associated with greater activation extending into both temporal poles, the amygdala, and insular cortex. Overall, these results support the claim that ‘happiness’ can be fractionated into amusement, pleasure, relief and triumph.
  • Scharenborg, O., & Seneff, S. (2005). A two-pass strategy for handling OOVs in a large vocabulary recognition task. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology, (pp. 1669-1672). ISCA Archive.

    Abstract

    This paper addresses the issue of large-vocabulary recognition in a specific word class. We propose a two-pass strategy in which only major cities are explicitly represented in the first stage lexicon. An unknown word model encoded as a phone loop is used to detect OOV city names (referred to as rare city names), after which SpeM, a tool that can extract words and word-initial cohorts from phone graphs on the basis of a large fallback lexicon, provides an N-best list of promising city names on the basis of the phone sequences generated in the first stage. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances each containing one rare city name. We tested the size of the N-best list and three types of language models (LMs). The experiments showed that SpeM was able to include nearly 85% of the correct city names into an N-best list of 3000 city names when a unigram LM, which also boosted the unigram scores of a city name in a given state, was used.
  • Scharenborg, O., Bouwman, G., & Boves, L. (2000). Connected digit recognition with class specific word models. In Proceedings of the COST249 Workshop on Voice Operated Telecom Services workshop (pp. 71-74).

    Abstract

    This work focuses on efficient use of the training material by selecting the optimal set of model topologies. We do this by training multiple word models of each word class, based on a subclassification according to a priori knowledge of the training material. We examine classification criteria with respect to duration of the word, gender of the speaker, position of the word in the utterance, pauses in the vicinity of the word, and combinations of these. Comparative experiments were carried out on a corpus consisting of Dutch spoken connected digit strings and isolated digits, recorded in a wide variety of acoustic conditions. The results show that classification based on gender of the speaker, position of the digit in the string, pauses in the vicinity of the training tokens, and models based on a combination of these criteria perform significantly better than the set with a single model per digit.
  • Scharenborg, O., Norris, D., Ten Bosch, L., & McQueen, J. M. (2005). How should a speech recognizer work? Cognitive Science, 29(6), 867-918. doi:10.1207/s15516709cog0000_37.

    Abstract

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to existing models of HSR, recognizes words from real speech input.
  • Scharenborg, O. (2005). Narrowing the gap between automatic and human word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Radboud University Nijmegen, 16 September 2005
  • Scharenborg, O. (2005). Parallels between HSR and ASR: How ASR can contribute to HSR. In Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology (pp. 1237-1240). ISCA Archive.

    Abstract

    In this paper, we illustrate the close parallels between the research fields of human speech recognition (HSR) and automatic speech recognition (ASR) using a computational model of human word recognition, SpeM, which was built using techniques from ASR. We show that ASR has proven to be useful for improving models of HSR by relieving them of some of their shortcomings. However, many issues remain to be resolved before an integrated computational model of all aspects of HSR can be built. In this process, ASR algorithms and techniques can certainly play an important role.
  • Schiller, N. O. (1997). The role of the syllable in speech production: Evidence from lexical statistics, metalinguistics, masked priming, and electromagnetic midsagittal articulography. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057707.
  • Schiller, N. O. (2005). Verbal self-monitoring. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 245-261). Mahwah, NJ: Erlbaum.
  • Schiller, N. O., Meyer, A. S., Baayen, R. H., & Levelt, W. J. M. (1996). A comparison of lexeme and speech syllables in Dutch. Journal of Quantitative Linguistics, 3(1), 8-28.

    Abstract

    The CELEX lexical database includes a list of Dutch syllables and their frequencies, based on syllabification of isolated word forms. In connected speech, however, sentence-level phonological rules can modify the syllables and their token frequencies. In order to estimate the changes syllables may undergo in connected speech, an empirical investigation was carried out. A large Dutch text corpus (TROUW) was transcribed, processed by word level rules, and syllabified. The resulting lexeme syllables were evaluated by comparing them to the CELEX lexical database for Dutch. Then additional phonological sentence-level rules were applied to the TROUW corpus, and the frequencies of the resulting connected speech syllables were compared with those of the lexeme syllables from TROUW. The overall correlation between lexeme and speech syllables was very high. However, speech syllables generally had more complex CV structures than lexeme syllables. Implications of the results for research involving syllables are discussed. With respect to the notion of a mental syllabary (a store for precompiled articulatory programs for syllables, see Levelt & Wheeldon, 1994) this study revealed an interesting statistical result. The calculation of the cumulative syllable frequencies showed that 85% of the syllable tokens in Dutch can be covered by the 500 most frequent syllable types, which makes the idea of a syllabary very attractive.
  • Schiller, N. O., Van Lieshout, P. H. H. M., Meyer, A. S., & Levelt, W. J. M. (1997). Is the syllable an articulatory unit in speech production? Evidence from an EMMA study. In P. Wille (Ed.), Fortschritte der Akustik: Plenarvorträge und Fachbeiträge der 23. Deutschen Jahrestagung für Akustik (DAGA 97) (pp. 605-606). Oldenburg: DEGA.
  • Schiller, N. O., & Köster, O. (1996). Evaluation of a foreign speaker in forensic phonetics: A report. Forensic Linguistics: The international Journal of Speech, Language and the Law, 3, 176-185.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

    A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['lɛtər] 'id.', where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Schmitt, B. M. (1997). Lexical access in the production of ellipsis and pronouns. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057702.
  • Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2005). Neuronal coherence as a mechanism of effective corticospinal interaction. Science, 308, 111-113. doi:10.1126/science.1107027.

    Abstract

    Neuronal groups can interact with each other even if they are widely separated. One group might modulate its firing rate or its internal oscillatory synchronization to influence another group. We propose that coherence between two neuronal groups is a mechanism of efficient interaction, because it renders mutual input optimally timed and thereby maximally effective. Modulations of subjects' readiness to respond in a simple reaction-time task were closely correlated with the strength of gamma-band (40 to 70 hertz) coherence between motor cortex and spinal cord neurons. This coherence may contribute to an effective corticospinal interaction and shortened reaction times.