Publications

  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English-speaking children acquire wh-questions is determined by two interlocking linguistic factors: the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study, over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña. Faits de Langues, 21, 121-132.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (1979). Meer over minder dan hoeft. De Nieuwe Taalgids, 72(3), 236-239.
  • Seuren, P. A. M. (1989). Neue Entwicklungen im Wahrheitsbegriff. Studia Leibnitiana, 21(2), 155-173.
  • Smith, M. R., Cutler, A., Butterfield, S., & Nimmo-Smith, I. (1989). The perception of rhythm and word boundaries in noise-masked speech. Journal of Speech and Hearing Research, 32, 912-920.

    Abstract

    The present experiment tested the suggestion that human listeners may exploit durational information in speech to parse continuous utterances into words. Listeners were presented with six-syllable unpredictable utterances under noise-masking, and were required to judge between alternative word strings as to which best matched the rhythm of the masked utterances. For each utterance there were four alternative strings: (a) an exact rhythmic and word boundary match, (b) a rhythmic mismatch, and (c) two utterances with the same rhythm as the masked utterance, but different word boundary locations. Listeners were clearly able to perceive the rhythm of the masked utterances: The rhythmic mismatch was chosen significantly less often than any other alternative. Within the three rhythmically matched alternatives, the exact match was chosen significantly more often than either word boundary mismatch. Thus, listeners both perceived speech rhythm and used durational cues effectively to locate the position of word boundaries.
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488 520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Stivers, T., Mangione-Smith, R., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. Journal of Family Practice, 52(2), 140-147.
  • Swaab, T., Brown, C. M., & Hagoort, P. (2003). Understanding words in sentence contexts: The time course of ambiguity resolution. Brain and Language, 86(2), 326-343. doi:10.1016/S0093-934X(02)00547-3.

    Abstract

    Spoken language comprehension requires rapid integration of information from multiple linguistic sources. In the present study we addressed the temporal aspects of this integration process by focusing on the time course of the selection of the appropriate meaning of lexical ambiguities (“bank”) in sentence contexts. Successful selection of the contextually appropriate meaning of the ambiguous word is dependent upon the rapid binding of the contextual information in the sentence to the appropriate meaning of the ambiguity. We used the N400 to identify the time course of this binding process. The N400 was measured to target words that followed three types of context sentences. In the concordant context, the sentence biased the meaning of the sentence-final ambiguous word so that it was related to the target. In the discordant context, the sentence context biased the meaning so that it was not related to the target. In the unrelated control condition, the sentences ended in an unambiguous noun that was unrelated to the target. Half of the concordant sentences biased the dominant meaning, and the other half biased the subordinate meaning of the sentence-final ambiguous words. The ISI between the offset of the sentence-final word of the context sentence and the onset of the target word was 100 ms in one version of the experiment, and 1250 ms in the second version. We found that (i) the lexically dominant meaning is always partly activated, independent of context, (ii) initially both dominant and subordinate meaning are (partly) activated, which suggests that contextual and lexical factors both contribute to sentence interpretation without context completely overriding lexical information, and (iii) strong lexical influences remain present for a relatively long period of time.
  • Swingley, D. (2003). Phonetic detail in the developing lexicon. Language and Speech, 46(3), 265-294.

    Abstract

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct pronunciations and mispronunciations of familiar words naming one of the pictures. Mispronunciations were word-initial in Experiment 1 and word-medial in Experiment 2, and in both experiments involved substituting one segment with [d] (a common sound in Dutch) or [g] (a rare sound). In both experiments, word recognition performance was better for correct pronunciations than for mispronunciations involving either substituted consonant. These effects did not depend upon children's knowledge of lexical or nonlexical phonological neighbors of the tested words. The results indicate the encoding of phonetic detail in words at 19 months.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
  • Terrill, A., & Dunn, M. (2003). Orthographic design in the Solomon Islands: The social, historical, and linguistic situation of Touo (Baniata). Written Language and Literacy, 6(2), 177-192. doi:10.1075/wll.6.2.03ter.

    Abstract

    This paper discusses the development of an orthography for the Touo language (Solomon Islands). Various orthographies have been proposed for this language in the past, and the paper discusses why they are perceived by the community to have failed. Current opinion about orthography development within the Touo-speaking community is divided along religious, political, and geographical grounds; and the development of a successful orthography must take into account a variety of opinions. The paper examines the social, historical, and linguistic obstacles that have hitherto prevented the development of an accepted Touo orthography, and presents a new proposal which has thus far gained acceptance with community leaders. The fundamental issue is that creating an orthography for a language takes place in a social, political, and historical context; and for an orthography to be acceptable for the speakers of a language, all these factors must be taken into account.
  • Terrill, A. (2003). Linguistic stratigraphy in the central Solomon Islands: Lexical evidence of early Papuan/Austronesian interaction. Journal of the Polynesian Society, 112(4), 369-401.

    Abstract

    The extent to which linguistic borrowing can be used to shed light on the existence and nature of early contact between Papuan and Oceanic speakers is examined. The question is addressed by taking one Papuan language, Lavukaleve, spoken in the Russell Islands, central Solomon Islands, and examining lexical borrowings between it and nearby Oceanic languages, and with reconstructed forms of Proto Oceanic. Evidence from ethnography, culture history and archaeology, when added to the linguistic evidence provided in this study, indicates long-standing cultural links with other (non-Russell) islands. The composite picture is one of a high degree of cultural contact with little linguistic mixing, i.e., little or no changes affecting the structure of the languages and actually very little borrowed vocabulary.
  • Van Turennout, M., Bielamowicz, L., & Martin, A. (2003). Modulation of neural activity during object naming: Effects of time and practice. Cerebral Cortex, 13(4), 381-391.

    Abstract

    Repeated exposure to objects improves our ability to identify and name them, even after a long delay. Previous brain imaging studies have demonstrated that this experience-related facilitation of object naming is associated with neural changes in distinct brain regions. We used event-related functional magnetic resonance imaging (fMRI) to examine the modulation of neural activity in the object naming system as a function of experience and time. Pictures of common objects were presented repeatedly for naming at different time intervals (1 h, 6 h and 3 days) before scanning, or at 30 s intervals during scanning. The results revealed that as objects became more familiar with experience, activity in occipitotemporal and left inferior frontal regions decreased while activity in the left insula and basal ganglia increased. In posterior regions, reductions in activity as a result of multiple repetitions did not interact with time, whereas in left inferior frontal cortex larger decreases were observed when repetitions were spaced out over time. This differential modulation of activity in distinct brain regions provides support for the idea that long-lasting object priming is mediated by two neural mechanisms. The first mechanism may involve changes in object-specific representations in occipitotemporal cortices, the second may be a form of procedural learning involving a reorganization in brain circuitry that leads to more efficient name retrieval.
  • Van Berkum, J. J. A., Zwitserlood, P., Hagoort, P., & Brown, C. M. (2003). When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cognitive Brain Research, 17(3), 701-718. doi:10.1016/S0926-6410(03)00196-4.

    Abstract

    In two ERP experiments, we assessed the impact of discourse-level information on the processing of an unfolding spoken sentence. Subjects listened to sentences like Jane told her brother that he was exceptionally quick/slow, designed such that the alternative critical words were equally acceptable within the local sentence context. In Experiment 1, these sentences were embedded in a discourse that rendered one of the critical words anomalous (e.g. because Jane’s brother had in fact done something very quickly). Relative to the coherent alternative, these discourse-anomalous words elicited a standard N400 effect that started at 150–200 ms after acoustic word onset. Furthermore, when the same sentences were heard in isolation in Experiment 2, the N400 effect disappeared. The results demonstrate that our listeners related the unfolding spoken words to the wider discourse extremely rapidly, after having heard the first two or three phonemes only, and in many cases well before the end of the word. In addition, the identical nature of discourse- and sentence-dependent N400 effects suggests that from the perspective of the word-elicited comprehension process indexed by the N400, the interpretive context delineated by a single unfolding sentence and a larger discourse is functionally identical.
  • Van Berkum, J. J. A., Brown, C. M., Hagoort, P., & Zwitserlood, P. (2003). Event-related brain potentials reflect discourse-referential ambiguity in spoken language comprehension. Psychophysiology, 40(2), 235-248. doi:10.1111/1469-8986.00025.

    Abstract

    In two experiments, we explored the use of event-related brain potentials to selectively track the processes that establish reference during spoken language comprehension. Subjects listened to stories in which a particular noun phrase like "the girl" either uniquely referred to a single referent mentioned in the earlier discourse, or ambiguously referred to two equally suitable referents. Referentially ambiguous nouns ("the girl" with two girls introduced in the discourse context) elicited a frontally dominant and sustained negative shift in brain potentials, emerging within 300–400 ms after acoustic noun onset. The early onset of this effect reveals that reference to a discourse entity can be established very rapidly. Its morphology and distribution suggest that at least some of the processing consequences of referential ambiguity may involve an increased demand on memory resources. Furthermore, because this referentially induced ERP effect is very different from that of well-known ERP effects associated with the semantic (N400) and syntactic (e.g., P600/SPS) aspects of language comprehension, it suggests that ERPs can be used to selectively keep track of three major processes involved in the comprehension of an unfolding piece of discourse.
  • Van Gompel, R. P., & Majid, A. (2003). Antecedent frequency effects during the processing of pronouns. Cognition, 90(3), 255-264. doi:10.1016/S0010-0277(03)00161-6.

    Abstract

    An eye-movement reading experiment investigated whether the ease with which pronouns are processed is affected by the lexical frequency of their antecedent. Reading times following pronouns with infrequent antecedents were faster than following pronouns with frequent antecedents. We argue that this is consistent with a saliency account, according to which infrequent antecedents are more salient than frequent antecedents. The results are not predicted by accounts which claim that readers access all or part of the lexical properties of the antecedent during the processing of pronouns.
  • Verhoeven, L., Schreuder, R., & Baayen, R. H. (2003). Units of analysis in reading Dutch bisyllabic pseudowords. Scientific Studies of Reading, 7(3), 255-271. doi:10.1207/S1532799XSSR0703_4.

    Abstract

    Two experiments were carried out to explore the units of analysis used by children to read Dutch bisyllabic pseudowords. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme e may represent three different vowels: /∊/, /e/, or /λ/. In Experiment 1, Grade 6 elementary school children were presented lists of bisyllabic pseudowords containing the grapheme e in the initial syllable representing a content morpheme, a prefix, or a random string. On the basis of general word frequency data, we expected the interpretation of the initial syllable as a random string to elicit the pronunciation of a stressed /e/, the interpretation of the initial syllable as a content morpheme to elicit the pronunciation of a stressed /∊/, and the interpretation as a prefix to elicit the pronunciation of an unstressed /λ/. We found both the pronunciation and the stress assignment for pseudowords to depend on word type, which shows morpheme boundaries and prefixes to be identified. However, the identification of prefixes could also be explained by the correspondence of the prefix boundaries in the pseudowords to syllable boundaries. To exclude this alternative explanation, a follow-up experiment with the same group of children was conducted using bisyllabic pseudowords containing prefixes that did not coincide with syllable boundaries versus similar pseudowords with no prefix. The results of the first experiment were replicated. That is, the children identified prefixes and shifted their assignment of word stress accordingly. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Waller, D., & Haun, D. B. M. (2003). Scaling techniques for modeling directional knowledge. Behavior Research Methods, Instruments, & Computers, 35(2), 285-293.

    Abstract

    A common way for researchers to model or graphically portray spatial knowledge of a large environment is by applying multidimensional scaling (MDS) to a set of pairwise distance estimations. We introduce two MDS-like techniques that incorporate people’s knowledge of directions instead of (or in addition to) their knowledge of distances. Maps of a familiar environment derived from these procedures were more accurate and were rated by participants as being more accurate than those derived from nonmetric MDS. By incorporating people’s relatively accurate knowledge of directions, these methods offer spatial cognition researchers and behavioral geographers a sharper analytical tool than MDS for studying cognitive maps.
  • Weber, A., & Cutler, A. (2003). Perceptual similarity co-existing with lexical dissimilarity [Abstract]. Abstracts of the 146th Meeting of the Acoustical Society of America. Journal of the Acoustical Society of America, 114(4 Pt. 2), 2422. doi:10.1121/1.1601094.

    Abstract

    The extreme case of perceptual similarity is indiscriminability, as when two second-language phonemes map to a single native category. An example is the English had-head vowel contrast for Dutch listeners; Dutch has just one such central vowel, transcribed [E]. We examine whether the failure to discriminate in phonetic categorization implies indiscriminability in other—e.g., lexical—processing. Eyetracking experiments show that Dutch-native listeners instructed in English to "click on the panda" look (significantly more than native listeners) at a pictured pencil, suggesting that pan- activates their lexical representation of pencil. The reverse, however, is not the case: "click on the pencil" does not induce looks to a panda, suggesting that pen- does not activate panda in the lexicon. Thus prelexically undiscriminated second-language distinctions can nevertheless be maintained in stored lexical representations. The problem of mapping a resulting unitary input to two distinct categories in lexical representations is solved by allowing input to activate only one second-language category. For Dutch listeners to English, this is English [E], as a result of which no vowels in the signal ever map to words containing [æ]. We suggest that the choice of category is here motivated by a more abstract, phonemic, metric of similarity.
  • Wheeldon, L. (2003). Inhibitory form priming of spoken word production. Language and Cognitive Processes, 18(1), 81-109. doi:10.1080/01690960143000470.

    Abstract

    Three experiments were designed to examine the effect on picture naming of the prior production of a word related in phonological form. In Experiment 1, the latency to produce Dutch words in response to pictures (e.g., hoed, hat) was longer following the production of a form-related word (e.g., hond, dog) in response to a definition on a preceding trial, than when the preceding definition elicited an unrelated word (e.g., kerk, church). Experiment 2 demonstrated that the inhibitory effect disappears when one unrelated word is produced between the prime and target productions (e.g., hond-kerk-hoed). The size of the inhibitory effect was not significantly affected by the frequency of the prime words or the target picture names. In Experiment 3, facilitation was observed for word pairs that shared offset segments (e.g., kurk-jurk, cork-dress), whereas inhibition was observed for shared onset segments (e.g., bloed-bloem, blood-flower). However, no priming was observed for prime and target words with shared phonemes but no mismatching segments (e.g., oom-boom, uncle-tree; hek-heks, fence-witch). These findings are consistent with a process of phoneme competition during phonological encoding.
  • Wittenburg, P. (2003). The DOBES model of language documentation. Language Documentation and Description, 1, 122-139.
  • Zeshan, U. (2003). Aspects of Türk Işaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TID), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TID grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TID and the language community using this sign language, five domains of TID grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.