Publications

  • Verhoeven, L., Schreuder, R., & Baayen, R. H. (2003). Units of analysis in reading Dutch bisyllabic pseudowords. Scientific Studies of Reading, 7(3), 255-271. doi:10.1207/S1532799XSSR0703_4.

    Abstract

    Two experiments were carried out to explore the units of analysis used by children to read Dutch bisyllabic pseudowords. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme e may represent three different vowels: /ɛ/, /e/, or /ə/. In Experiment 1, Grade 6 elementary school children were presented with lists of bisyllabic pseudowords containing the grapheme e in the initial syllable, representing a content morpheme, a prefix, or a random string. On the basis of general word frequency data, we expected the interpretation of the initial syllable as a random string to elicit the pronunciation of a stressed /e/, the interpretation of the initial syllable as a content morpheme to elicit the pronunciation of a stressed /ɛ/, and the interpretation as a prefix to elicit the pronunciation of an unstressed /ə/. We found both the pronunciation and the stress assignment for pseudowords to depend on word type, which shows that morpheme boundaries and prefixes are identified. However, the identification of prefixes could also be explained by the correspondence of the prefix boundaries in the pseudowords to syllable boundaries. To exclude this alternative explanation, a follow-up experiment with the same group of children was conducted using bisyllabic pseudowords containing prefixes that did not coincide with syllable boundaries versus similar pseudowords with no prefix. The results of the first experiment were replicated. That is, the children identified prefixes and shifted their assignment of word stress accordingly. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C., Spiteri, E., Nicod, J., Groszer, M., Taylor, J. M., Davies, K. E., Geschwind, D., & Fisher, S. E. (2007). High-throughput analysis of promoter occupancy reveals direct neural targets of FOXP2, a gene mutated in speech and language disorders. American Journal of Human Genetics, 81(6), 1232-1250. doi:10.1086/522238.

    Abstract

    We previously discovered that mutations of the human FOXP2 gene cause a monogenic communication disorder, primarily characterized by difficulties in learning to make coordinated sequences of articulatory gestures that underlie speech. Affected people have deficits in expressive and receptive linguistic processing and display structural and/or functional abnormalities in cortical and subcortical brain regions. FOXP2 provides a unique window into neural processes involved in speech and language. In particular, its role as a transcription factor gene offers powerful functional genomic routes for dissecting critical neurogenetic mechanisms. Here, we employ chromatin immunoprecipitation coupled with promoter microarrays (ChIP-chip) to successfully identify genomic sites that are directly bound by FOXP2 protein in native chromatin of human neuron-like cells. We focus on a subset of downstream targets identified by this approach, showing that altered FOXP2 levels yield significant changes in expression in our cell-based models and that FOXP2 binds in a specific manner to consensus sites within the relevant promoters. Moreover, we demonstrate significant quantitative differences in target expression in embryonic brains of mutant mice, mediated by specific in vivo Foxp2-chromatin interactions. This work represents the first identification and in vivo verification of neural targets regulated by FOXP2. Our data indicate that FOXP2 has dual functionality, acting to either repress or activate gene expression at occupied promoters. The identified targets suggest roles in modulating synaptic plasticity, neurodevelopment, neurotransmission, and axon guidance and represent novel entry points into in vivo pathways that may be disturbed in speech and language disorders.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2003). Two ways of construing complex temporal structures. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 97-133). Amsterdam: Benjamins.
  • Vonk, W., & Cozijn, R. (2007). Psycholinguïstiek: Een kwantitatieve wetenschap. Tijdschrift voor Nederlandse Taal- en Letterkunde, 123, 55-69.
  • Vonk, W., & Cozijn, R. (2003). On the treatment of saccades and regressions in eye movement measures of reading time. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 291-312). Amsterdam: Elsevier.
  • Wagner, A., & Braun, A. (2003). Is voice quality language-dependent? Acoustic analyses based on speakers of three different languages. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 651-654). Adelaide: Causal Productions.
  • Waller, D., & Haun, D. B. M. (2003). Scaling techniques for modeling directional knowledge. Behavior Research Methods, Instruments, & Computers, 35(2), 285-293.

    Abstract

    A common way for researchers to model or graphically portray spatial knowledge of a large environment is by applying multidimensional scaling (MDS) to a set of pairwise distance estimations. We introduce two MDS-like techniques that incorporate people’s knowledge of directions instead of (or in addition to) their knowledge of distances. Maps of a familiar environment derived from these procedures were more accurate and were rated by participants as being more accurate than those derived from nonmetric MDS. By incorporating people’s relatively accurate knowledge of directions, these methods offer spatial cognition researchers and behavioral geographers a sharper analytical tool than MDS for studying cognitive maps.
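The pairwise-distances-to-map step that this abstract builds on can be sketched in a few lines. This is not the authors' code: it uses classical (metric) MDS rather than the nonmetric variant the paper compares against, and the distance matrix is hypothetical, purely to illustrate how a coordinate map is recovered from distance estimates.

```python
import numpy as np

# Hypothetical pairwise distance estimates among 4 landmarks
# (symmetric, zero diagonal) -- stand-ins for participants' judgments.
D = np.array([
    [0.0, 1.0, 2.0, 2.2],
    [1.0, 0.0, 1.1, 2.0],
    [2.0, 1.1, 0.0, 1.0],
    [2.2, 2.0, 1.0, 0.0],
])

def classical_mds(D, k=2):
    """Classical (metric) MDS: double-center the squared distances,
    then embed with the top-k eigenvectors of the resulting Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of the configuration
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # indices of the top-k eigenvalues
    scale = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * scale              # n x k coordinates

coords = classical_mds(D, k=2)
print(coords.shape)  # one (x, y) position per landmark
```

The paper's contribution is to replace or augment the distance input above with directional (bearing) judgments, which the authors found yields more accurate maps.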
  • Warner, N. (2003). Rapid perceptibility as a factor underlying universals of vowel inventories. In A. Carnie, H. Harley, & M. Willie (Eds.), Formal approaches to function in grammar, in honor of Eloise Jelinek (pp. 245-261). Amsterdam: Benjamins.
  • Wassenaar, M., & Hagoort, P. (2007). Thematic role assignment in patients with Broca's aphasia: Sentence-picture matching electrified. Neuropsychologia, 45(4), 716-740. doi:10.1016/j.neuropsychologia.2006.08.016.

    Abstract

    An event-related brain potential experiment was carried out to investigate on-line thematic role assignment during sentence–picture matching in patients with Broca's aphasia. Subjects were presented with a picture that was followed by an auditory sentence. The sentence either matched the picture or mismatched the visual information depicted. Sentences differed in complexity, and ranged from simple active semantically irreversible sentences to passive semantically reversible sentences. ERPs were recorded while subjects were engaged in sentence–picture matching. In addition, reaction time and accuracy were measured. Three groups of subjects were tested: Broca patients (N = 10), non-aphasic patients with a right hemisphere (RH) lesion (N = 8), and healthy age-matched controls (N = 15). The results of this study showed that, in neurologically unimpaired individuals, thematic role assignment in the context of visual information was an immediate process. This stands in contrast to patients with Broca's aphasia, who demonstrated no signs of on-line sensitivity to the picture–sentence mismatches. The syntactic contribution to the thematic role assignment process seemed to be diminished, given the reduction and even absence of P600 effects. Nevertheless, Broca patients showed some off-line behavioral sensitivity to the sentence–picture mismatches. The long response latencies of Broca's aphasics make it likely that off-line response strategies were used.
  • Weber, A., & Smits, R. (2003). Consonant and vowel confusion patterns by American English listeners. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1437-1440). Adelaide: Causal Productions.

    Abstract

    This study investigated the perception of American English phonemes by native listeners. Listeners identified either the consonant or the vowel in all possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). Effects of syllable position, signal-to-noise ratio, and articulatory features on vowel and consonant identification are discussed. The results constitute the largest source of data that is currently available on phoneme confusion patterns of American English phonemes by native listeners.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, A., & Cutler, A. (2003). Perceptual similarity co-existing with lexical dissimilarity [Abstract]. Abstracts of the 146th Meeting of the Acoustical Society of America. Journal of the Acoustical Society of America, 114(4 Pt. 2), 2422. doi:10.1121/1.1601094.

    Abstract

    The extreme case of perceptual similarity is indiscriminability, as when two second-language phonemes map to a single native category. An example is the English had-head vowel contrast for Dutch listeners; Dutch has just one such central vowel, transcribed [ɛ]. We examine whether the failure to discriminate in phonetic categorization implies indiscriminability in other—e.g., lexical—processing. Eyetracking experiments show that Dutch-native listeners instructed in English to "click on the panda" look (significantly more than native listeners) at a pictured pencil, suggesting that pan- activates their lexical representation of pencil. The reverse, however, is not the case: "click on the pencil" does not induce looks to a panda, suggesting that pen- does not activate panda in the lexicon. Thus prelexically undiscriminated second-language distinctions can nevertheless be maintained in stored lexical representations. The problem of mapping a resulting unitary input to two distinct categories in lexical representations is solved by allowing input to activate only one second-language category. For Dutch listeners to English, this is English [ɛ], as a result of which no vowels in the signal ever map to words containing [æ]. We suggest that the choice of category is here motivated by a more abstract, phonemic, metric of similarity.
  • Weber, A., Melinger, A., & Lara Tapia, L. (2007). The mapping of phonetic information to lexical representations in Spanish: Evidence from eye movements. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1941-1944). Dudweiler: Pirrot.

    Abstract

    In a visual-world study, we examined spoken-word recognition in Spanish. Spanish listeners followed spoken instructions to click on pictures while their eye movements were monitored. When instructed to click on the picture of a door (puerta), they experienced interference from the picture of a pig (puerco). The same interference from phonologically related items was observed when the displays contained printed names or a combination of pictures with their names printed underneath, although the effect was strongest for displays with printed names. Implications of the finding that the interference effect can be induced with standard pictorial displays as well as with orthographic displays are discussed.
  • De Weert, C., & Levelt, W. J. M. (1976). Comparison of normal and dichoptic colour mixing. Vision Research, 16, 59-70. doi:10.1016/0042-6989(76)90077-8.

    Abstract

    Dichoptic mixtures of equiluminous components of different wavelengths were matched with a binocularly presented "monocular" mixture of appropriate chosen amounts of the same colour components. Stimuli were chosen from the region of 490-630 nm. Although satisfactory colour matches could be obtained, dichoptic mixtures differed from normal mixtures to a considerable extent. Midspectral stimuli tended to be more dominant in the dichoptic mixtures than either short or long wavelength stimuli. An attempt was made to describe the relation between monocular and dichoptic mixtures with one function containing a wavelength variable and an eye dominance parameter.
  • De Weert, C., & Levelt, W. J. M. (1976). Dichoptic brightness combinations for unequally coloured lights. Vision Research, 16, 1077-1086.
  • Wender, K. F., Haun, D. B. M., Rasch, B. H., & Blümke, M. (2003). Context effects in memory for routes. In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial cognition III: Routes and navigation, human memory and learning, spatial representation and spatial learning (pp. 209-231). Berlin: Springer.
  • Wheeldon, L. (2003). Inhibitory form priming of spoken word production. Language and Cognitive Processes, 18(1), 81-109. doi:10.1080/01690960143000470.

    Abstract

    Three experiments were designed to examine the effect on picture naming of the prior production of a word related in phonological form. In Experiment 1, the latency to produce Dutch words in response to pictures (e.g., hoed, hat) was longer following the production of a form-related word (e.g., hond, dog) in response to a definition on a preceding trial than when the preceding definition elicited an unrelated word (e.g., kerk, church). Experiment 2 demonstrated that the inhibitory effect disappears when one unrelated word is produced intervening between prime and target productions (e.g., hond-kerk-hoed). The size of the inhibitory effect was not significantly affected by the frequency of the prime words or the target picture names. In Experiment 3, facilitation was observed for word pairs that shared offset segments (e.g., kurk-jurk, cork-dress), whereas inhibition was observed for shared onset segments (e.g., bloed-bloem, blood-flower). However, no priming was observed for prime and target words with shared phonemes but no mismatching segments (e.g., oom-boom, uncle-tree; hek-heks, fence-witch). These findings are consistent with a process of phoneme competition during phonological encoding.
  • Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34(3), 311-334. doi:10.1006/jmla.1995.1014.

    Abstract

    Three experiments examined the time course of phonological encoding in speech production. A new methodology is introduced in which subjects are required to monitor their internal speech production for prespecified target segments and syllables. Experiment 1 demonstrated that word initial target segments are monitored significantly faster than second syllable initial target segments. The addition of a concurrent articulation task (Experiment 1b) had a limited effect on performance, excluding the possibility that subjects are monitoring a subvocal articulation of the carrier word. Moreover, no relationship was observed between the pattern of monitoring latencies and the timing of the targets in subjects' overt speech. Subjects are not, therefore, monitoring an internal phonetic representation of the carrier word. Experiment 2 used the production monitoring task to replicate the syllable monitoring effect observed in speech perception experiments: responses to targets were faster when they corresponded to the initial syllable of the carrier word than when they did not. We conclude that subjects are monitoring their internal generation of a syllabified phonological representation. Experiment 3 provides more detailed evidence concerning the time course of the generation of this representation by comparing monitoring latencies to targets within, as well as between, syllables. Some amendments to current models of phonological encoding are suggested in light of these results.
  • Wilkins, D. (1995). Towards a Socio-Cultural Profile of the Communities We Work With. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 70-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513481.

    Abstract

    Field data are drawn from a particular speech community at a certain place and time. The intent of this survey is to enrich understanding of the various socio-cultural contexts in which linguistic and “cognitive” data may have been collected, so that we can explore the role which societal, cultural and contextual factors may play in this material. The questionnaire gives guidelines concerning types of ethnographic information that are important to cross-cultural and cross-linguistic enquiry, and will be especially useful to researchers who do not have specialised training in anthropology.
  • Wilkins, D., Pederson, E., & Levinson, S. C. (1995). Background questions for the "enter"/"exit" research. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 14-16). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003935.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This document outlines topics concerning the investigation of “enter” and “exit” events. It helps contextualise research tasks that examine this domain (see 'Motion Elicitation' and 'Enter/Exit animation') and gives some pointers about what other questions can be explored.
  • Wilkins, D., Kita, S., & Enfield, N. J. (2007). 'Ethnography of pointing' - field worker's guide. In A. Majid (Ed.), Field Manual Volume 10 (pp. 89-95). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492922.

    Abstract

    Pointing gestures are recognised to be a primary manifestation of human social cognition and communicative capacity. The goal of this task is to collect empirical descriptions of pointing practices in different cultural settings.
  • Wilkins, D. (1995). Motion elicitation: "moving 'in(to)'" and "moving 'out (of)'". In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 4-12). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3003391.

    Abstract

    How do languages encode different kinds of movement, and what features do people pay attention to when describing motion events? This task investigates the expression of “enter” and “exit” activities, that is, events involving motion in(to) and motion out (of) container-like items. The researcher first uses particular stimuli (a ball, a cup, rice, etc.) to elicit descriptions of enter/exit events from one consultant, and then asks another consultant to demonstrate the event based on these descriptions. See also the related entries Enter/Exit Animation and Background Questions for Enter/Exit Research.
  • Wilkins, D. P., & Hill, D. (1995). When "go" means "come": Questioning the basicness of basic motion verbs. Cognitive Linguistics, 6, 209-260. doi:10.1515/cogl.1995.6.2-3.209.

    Abstract

    The purpose of this paper is to question some of the basic assumptions concerning motion verbs. In particular, it examines the assumption that "come" and "go" are lexical universals which manifest a universal deictic opposition. Against the background of five working hypotheses about the nature of "come" and "go", this study presents a comparative investigation of two unrelated languages—Mparntwe Arrernte (Pama-Nyungan, Australian) and Longgu (Oceanic, Austronesian). Although the pragmatic and deictic "suppositional" complexity of "come" and "go" expressions has long been recognized, we argue that in any given language the analysis of these expressions is much more semantically and systemically complex than has been assumed in the literature. Languages vary at the lexical semantic level as to what is entailed by these expressions, as well as differing as to what constitutes the prototype and categorial structure for such expressions. The data also strongly suggest that, if there is a lexical universal "go", then this cannot be an inherently deictic expression. However, due to systemic opposition with "come", non-deictic "go" expressions often take on a deictic interpretation through pragmatic attribution. Thus, this crosslinguistic investigation of "come" and "go" highlights the need to consider semantics and pragmatics as modularly separate.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2007). When language meets action: The neural integration of gesture and speech. Cerebral Cortex, 17(10), 2322-2333. doi:10.1093/cercor/bhl141.

    Abstract

    Although generally studied in isolation, language and action often co-occur in everyday life. Here we investigated one particular form of simultaneous language and action, namely speech and gestures that speakers use in everyday communication. In a functional magnetic resonance imaging study, we identified the neural networks involved in the integration of semantic information from speech and gestures. Verbal and/or gestural content could be integrated easily or less easily with the content of the preceding part of speech. Premotor areas involved in action observation (Brodmann area [BA] 6) were found to be specifically modulated by action information "mismatching" to a language context. Importantly, an increase in integration load of both verbal and gestural information into prior speech context activated Broca's area and adjacent cortex (BA 45/47). A classical language area, Broca's area, is not only recruited for language-internal processing but also when action observation is integrated with speech. These findings provide direct evidence that action and language processing share a high-level neural integration system.
  • Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101(3), 278-289. doi:10.1016/j.bandl.2007.03.004.

    Abstract

    Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It is concluded that there is strong evidence for an interaction between speech and gestures in the brain. This interaction, however, shares general properties with other domains in which there is interplay between language and action.
  • Willems, R. M. (2007). The neural construction of a Tinkertoy [‘Journal club’ review]. The Journal of Neuroscience, 27, 1509-1510. doi:10.1523/JNEUROSCI.0005-07.2007.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Wittenburg, P. (2003). The DOBES model of language documentation. Language Documentation and Description, 1, 122-139.
  • Womelsdorf, T., Schoffelen, J.-M., Oostenveld, R., Singer, W., Desimone, R., Engel, A. K., & Fries, P. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316, 1609-1612. doi:10.1126/science.1139597.

    Abstract

    Brain processing depends on the interactions between neuronal groups. Those interactions are governed by the pattern of anatomical connections and by yet unknown mechanisms that modulate the effective strength of a given connection. We found that the mutual influence among neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations supporting interactions between the groups preceded those interactions by a few milliseconds, consistent with a mechanistic role. These effects were specific in time, frequency, and space, and we therefore propose that the pattern of synchronization flexibly determines the pattern of neuronal interactions.
  • Zeshan, U. (2003). Aspects of Türk İşaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TİD), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TİD grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TİD and the language community using this sign language, five domains of TİD grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.
  • Ziegler, A., DeStefano, A. L., König, I. R., Bardel, C., Brinza, D., Bull, S., Cai, Z., Glaser, B., Jiang, W., Lee, K. E., Li, C. X., Li, J., Li, X., Majoram, P., Meng, Y., Nicodemus, K. K., Platt, A., Schwarz, D. F., Shi, W., Shugart, Y. Y., Stassen, H. H., Sun, Y. V., Won, S., Wang, W., Wahba, G., Zagaar, U. A., & Zhao, Z. (2007). Data mining, neural nets, trees–problems 2 and 3 of Genetic Analysis Workshop 15. Genetic Epidemiology, 31(Suppl 1), S51-S60. doi:10.1002/gepi.20280.

    Abstract

    Genome-wide association studies using thousands to hundreds of thousands of single nucleotide polymorphism (SNP) markers and region-wide association studies using a dense panel of SNPs are already in use to identify disease susceptibility genes and to predict disease risk in individuals. Because these tasks become increasingly important, three different data sets were provided for the Genetic Analysis Workshop 15, thus allowing examination of various novel and existing data mining methods for both classification and identification of disease susceptibility genes, gene by gene or gene by environment interaction. The approach most often applied in this presentation group was random forests because of its simplicity, elegance, and robustness. It was used for prediction and for screening for interesting SNPs in a first step. The logistic tree with unbiased selection approach appeared to be an interesting alternative to efficiently select interesting SNPs. Machine learning, specifically ensemble methods, might be useful as pre-screening tools for large-scale association studies because they can be less prone to overfitting, can be less computer processor time intensive, can easily include pair-wise and higher-order interactions compared with standard statistical approaches and can also have a high capability for classification. However, improved implementations that are able to deal with hundreds of thousands of SNPs at a time are required.
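The random-forest pre-screening idea described in this abstract can be sketched briefly. This is not code from any workshop contribution: it uses scikit-learn's `RandomForestClassifier` on simulated genotype data, with one SNP (index 5) constructed to carry signal, purely to illustrate ranking SNPs by variable importance before a more focused analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 400, 20
# Simulated genotypes: minor-allele counts coded 0/1/2 for p SNPs.
X = rng.integers(0, 3, size=(n, p))
# Simulated binary phenotype: risk rises with the allele count at SNP 5.
logit = -1.0 + 1.5 * X[:, 5]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Rank SNPs by importance; the causal SNP should appear at or near the top.
ranked = np.argsort(rf.feature_importances_)[::-1]
print(ranked[:5])
```

In a real genome-wide setting, the abstract notes, the appeal of this kind of ensemble screen is that it captures pairwise and higher-order interactions without an explicit model, though implementations must scale to hundreds of thousands of SNPs.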
  • Zwitserlood, I. (2003). Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). PhD Thesis, LOT, Utrecht. Retrieved from http://igitur-archive.library.uu.nl/dissertations/2003-0717-122837/UUindex.html.

    Abstract

    This study investigates the morphological and morphosyntactic characteristics of hand configurations in signs, particularly in Nederlandse Gebarentaal (NGT). The literature on sign languages in general acknowledges that hand configurations can function as morphemes, more specifically as classifiers, in a subset of signs: verbs expressing the motion, location, and existence of referents (VELMs). These verbs are considered the output of productive sign formation processes. In contrast, other signs in which similar hand configurations appear (iconic or motivated signs) have been considered to be lexicalized signs, not involving productive processes. This research report shows that meaningful hand configurations have (at least) two very different functions in the grammar of NGT (and presumably in other sign languages, too). First, they are agreement markers on VELMs, and hence are functional elements. Second, they are roots in motivated signs, and thus lexical elements. The latter signs are analysed as root compounds and are formed from various roots by productive processes. The similarities in surface form and differences in morphosyntactic characteristics observed in comparison of VELMs and root compounds are attributed to their different structures and to the sign language interface between grammar and phonetic form.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/ morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.
