Publications

  • Shipley, J. M., Birdsall, S., Clark, J., Crew, J., Gill, S., Linehan, M., Gnarra, J., Fisher, S. E., Craig, I. W., & Cooper, C. S. (1995). Mapping the X chromosome breakpoint in two papillary renal cell carcinoma cell lines with a t(X;1)(p11.2;q21.2) and the first report of a female case. Cytogenetic and genome research, 71(3), 280-284. doi:10.1159/000134127.

    Abstract

    A t(X;1)(p11.2;q21.2) has been reported in cases of papillary renal cell tumors arising in males. In this study two cell lines derived from this tumor type have been used to indicate the breakpoint region on the X chromosome. Both cell lines have the translocation in addition to other rearrangements and one is derived from the first female case to be reported with the t(X;1)(p11.2;q21.2). Fluorescence in situ hybridization (FISH) has been used to position YACs belonging to contigs in the Xp11.2 region relative to the breakpoint. When considered together with detailed mapping information from the Xp11.2 region the position of the breakpoint in both cell lines was suggested as follows: Xpter-->Xp11.23-OATL1-GATA1-WAS-TFE3-SYP-t(X;1)-DXS255-CLCN5-DXS146-OATL2-Xp11.22-->Xcen. The breakpoint was determined to lie in an uncloned region between SYP and a YAC called FTDM/1 which extends 1 Mb distal to DXS255. These results are contrary to the conclusion from previous FISH studies that the breakpoint was near the OATL2 locus, but are consistent with, and considerably refine, the position that had been established by molecular analysis.
  • Smits, R. (2000). Temporal distribution of information for human consonant recognition in VCV utterances. Journal of Phonetics, 28, 111-135. doi:10.1006/jpho.2000.0107.

    Abstract

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature, i.e., the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
  • Swaab, T., Brown, C. M., & Hagoort, P. (1995). Delayed integration of lexical ambiguities in Broca's aphasics: Evidence from event-related potentials. Brain and Language, 51, 159-161. doi:10.1006/brln.1995.1058.
  • Swingley, D., & Aslin, R. N. (2000). Spoken word recognition and lexical representation in very young children. Cognition, 76, 147-166. doi:10.1016/S0010-0277(00)00081-0.

    Abstract

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of one segment with a similar segment, as in ‘baby–vaby’. Children heard sentences containing these words while viewing two pictures, one of which was the referent of the sentence. Analyses of children's eye movements showed that children recognized the spoken words in both conditions, but that recognition was significantly poorer when words were mispronounced. The effects of mispronunciation on recognition were unrelated to age or to spoken vocabulary size. The results suggest that children's representations of familiar words are phonetically well-specified, and that this specification may not be a consequence of the need to differentiate similar words in production.
  • Tanenhaus, M. K., Magnuson, J. S., Dahan, D., & Chambers, C. (2000). Eye movements and lexical access in spoken-language comprehension: evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. doi:10.1023/A:1026464108329.

    Abstract

    A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
  • Van Wijk, C., & Kempen, G. (1987). A dual system for producing self-repairs in spontaneous speech: Evidence from experimentally elicited corrections. Cognitive Psychology, 19, 403-440. doi:10.1016/0010-0285(87)90014-4.

    Abstract

    This paper presents a cognitive theory on the production and shaping of self-repairs during speaking. In an extensive experimental study, a new technique is tried out: artificial elicitation of self-repairs. The data clearly indicate that two mechanisms for computing the shape of self-repairs should be distinguished. One is based on the repair strategy called reformulation, the second one on lemma substitution. W. Levelt’s (1983, Cognition, 14, 41-104) well-formedness rule, which connects self-repairs to coordinate structures, is shown to apply only to reformulations. In the case of lemma substitution, a totally different set of rules is at work. The linguistic unit of central importance in reformulations is the major syntactic constituent; in lemma substitutions it is a prosodic unit, the phonological phrase. A parametrization of the model yielded a very satisfactory fit between observed and reconstructed scores.
  • Van Berkum, J. J. A. (1986). De cognitieve psychologie op zoek naar grondslagen [Cognitive psychology in search of foundations]. Kennis en Methode: Tijdschrift voor wetenschapsfilosofie en methodologie, X, 348-360.
  • Van Berkum, J. J. A. (1986). Doordacht gevoel: Emoties als informatieverwerking [Thought-through feeling: Emotions as information processing]. De Psycholoog, 21(9), 417-423.
  • Van Valin Jr., R. D. (1987). Aspects of the interaction of syntax and pragmatics: Discourse coreference mechanisms and the typology of grammatical systems. In M. Bertuccelli Papi, & J. Verschueren (Eds.), The pragmatic perspective: Selected papers from the 1985 International Pragmatics Conference (pp. 513-531). Amsterdam: Benjamins.
  • Van Valin Jr., R. D. (2000). Focus structure or abstract syntax? A role and reference grammar account of some ‘abstract’ syntactic phenomena. In Z. Estrada Fernández, & I. Barreras Aguilar (Eds.), Memorias del V Encuentro Internacional de Lingüística en el Noroeste: (2 v.) Estudios morfosintácticos (pp. 39-62). Hermosillo: Editorial Unison.
  • Van Valin Jr., R. D. (1987). Pragmatics, island phenomena, and linguistic competence. In A. M. Farley, P. T. Farley, & K.-E. McCullough (Eds.), CLS 22. Papers from the parasession on pragmatics and grammatical theory (pp. 223-233). Chicago Linguistic Society.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

    Based on the results of an event-related brain potentials (ERP) experiment (van Berkum, Brown, & Hagoort, 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. 
    Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34(3), 311-334. doi:10.1006/jmla.1995.1014.

    Abstract

    Three experiments examined the time course of phonological encoding in speech production. A new methodology is introduced in which subjects are required to monitor their internal speech production for prespecified target segments and syllables. Experiment 1 demonstrated that word initial target segments are monitored significantly faster than second syllable initial target segments. The addition of a concurrent articulation task (Experiment 1b) had a limited effect on performance, excluding the possibility that subjects are monitoring a subvocal articulation of the carrier word. Moreover, no relationship was observed between the pattern of monitoring latencies and the timing of the targets in subjects' overt speech. Subjects are not, therefore, monitoring an internal phonetic representation of the carrier word. Experiment 2 used the production monitoring task to replicate the syllable monitoring effect observed in speech perception experiments: responses to targets were faster when they corresponded to the initial syllable of the carrier word than when they did not. We conclude that subjects are monitoring their internal generation of a syllabified phonological representation. Experiment 3 provides more detailed evidence concerning the time course of the generation of this representation by comparing monitoring latencies to targets within, as well as between, syllables. Some amendments to current models of phonological encoding are suggested in light of these results.
  • Wilkins, D. P., & Hill, D. (1995). When "go" means "come": Questioning the basicness of basic motion verbs. Cognitive Linguistics, 6, 209-260. doi:10.1515/cogl.1995.6.2-3.209.

    Abstract

    The purpose of this paper is to question some of the basic assumptions concerning motion verbs. In particular, it examines the assumption that "come" and "go" are lexical universals which manifest a universal deictic opposition. Against the background of five working hypotheses about the nature of "come" and "go", this study presents a comparative investigation of two unrelated languages—Mparntwe Arrernte (Pama-Nyungan, Australian) and Longgu (Oceanic, Austronesian). Although the pragmatic and deictic "suppositional" complexity of "come" and "go" expressions has long been recognized, we argue that in any given language the analysis of these expressions is much more semantically and systemically complex than has been assumed in the literature. Languages vary at the lexical semantic level as to what is entailed by these expressions, as well as differing as to what constitutes the prototype and categorial structure for such expressions. The data also strongly suggest that, if there is a lexical universal "go", then this cannot be an inherently deictic expression. However, due to systemic opposition with "come", non-deictic "go" expressions often take on a deictic interpretation through pragmatic attribution. Thus, this crosslinguistic investigation of "come" and "go" highlights the need to consider semantics and pragmatics as modularly separate.