Publications

  • Senft, G. (2000). [Review of the book Language, identity, and marginality in Indonesia: The changing nature of ritual speech on the island of Sumba by Joel C. Kuipers]. Linguistics, 38, 435-441. doi:10.1515/ling.38.2.435.
  • Senft, G., & Smits, R. (Eds.). (2000). Max-Planck-Institute for Psycholinguistics: Annual report 2000. Nijmegen: MPI for Psycholinguistics.
  • Senft, G. (2000). Introduction. In G. Senft (Ed.), Systems of nominal classification (pp. 1-10). Cambridge: Cambridge University Press.
  • Senft, G. (Ed.). (2000). Systems of nominal classification. Cambridge: Cambridge University Press.
  • Senft, G. (2000). What do we really know about nominal classification systems? In Conference handbook. The 18th national conference of the English Linguistic Society of Japan. 18-19 November, 2000, Konan University (pp. 225-230). Kobe: English Linguistic Society of Japan.
  • Senft, G. (2000). What do we really know about nominal classification systems? In G. Senft (Ed.), Systems of nominal classification (pp. 11-49). Cambridge: Cambridge University Press.
  • Seuren, P. A. M. (2000). Bewustzijn en taal. Splijtstof, 28(4), 111-123.
  • Seuren, P. A. M. (2000). A discourse-semantic account of topic and comment. In N. Nicolov, & R. Mitkov (Eds.), Recent advances in natural language processing II. Selected papers from RANLP '97 (pp. 179-190). Amsterdam: Benjamins.
  • Seuren, P. A. M. (2000). Presupposition, negation and trivalence. Journal of Linguistics, 36(2), 261-297.
  • Seuren, P. A. M. (2000). Pseudocomplementen. In H. Den Besten, E. Elffers, & J. Luif (Eds.), Samengevoegde woorden. Voor Wim Klooster bij zijn afscheid als hoogleraar (pp. 231-237). Amsterdam: Leerstoelgroep Nederlandse Taalkunde, Universiteit van Amsterdam.
  • Smits, R. (2000). Temporal distribution of information for human consonant recognition in VCV utterances. Journal of Phonetics, 28, 111-135. doi:10.1006/jpho.2000.0107.

    Abstract

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature, i.e., the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
  • Swingley, D., & Aslin, R. N. (2000). Spoken word recognition and lexical representation in very young children. Cognition, 76, 147-166. doi:10.1016/S0010-0277(00)00081-0.

    Abstract

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of one segment with a similar segment, as in ‘baby–vaby’. Children heard sentences containing these words while viewing two pictures, one of which was the referent of the sentence. Analyses of children's eye movements showed that children recognized the spoken words in both conditions, but that recognition was significantly poorer when words were mispronounced. The effects of mispronunciation on recognition were unrelated to age or to spoken vocabulary size. The results suggest that children's representations of familiar words are phonetically well-specified, and that this specification may not be a consequence of the need to differentiate similar words in production.
  • Tanenhaus, M. K., Magnuson, J. S., Dahan, D., & Chambers, C. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. doi:10.1023/A:1026464108329.

    Abstract

    A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
  • Van Valin Jr., R. D. (2000). Focus structure or abstract syntax? A role and reference grammar account of some ‘abstract’ syntactic phenomena. In Z. Estrada Fernández, & I. Barreras Aguilar (Eds.), Memorias del V Encuentro Internacional de Lingüística en el Noroeste: (2 v.) Estudios morfosintácticos (pp. 39-62). Hermosillo: Editorial Unison.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

    Based on the results of an event-related brain potentials (ERP) experiment (van Berkum, Brown, & Hagoort, 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. 
Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Zavala, R. (2000). Multiple classifier systems in Akatek (Mayan). In G. Senft (Ed.), Systems of nominal classification (pp. 114-146). Cambridge: Cambridge University Press.