Publications

  • Vuong, L., & Martin, R. C. (2011). LIFG-based attentional control and the resolution of lexical ambiguities in sentence context. Brain and Language, 116, 22-32. doi:10.1016/j.bandl.2010.09.012.

    Abstract

    The role of attentional control in lexical ambiguity resolution was examined in two patients with damage to the left inferior frontal gyrus (LIFG) and one control patient with non-LIFG damage. Experiment 1 confirmed that the LIFG patients had attentional control deficits compared to normal controls while the non-LIFG patient was relatively unimpaired. Experiment 2 showed that all three patients did as well as normal controls in using biasing sentence context to resolve lexical ambiguities involving balanced ambiguous words, but only the LIFG patients took an abnormally long time on lexical ambiguities that resolved toward a subordinate meaning of biased ambiguous words. Taken together, the results suggest that attentional control plays an important role in the resolution of certain lexical ambiguities – those that induce strong interference from context-inappropriate meanings (i.e., dominant meanings of biased ambiguous words).
  • Waller, D., & Haun, D. B. M. (2003). Scaling techniques for modeling directional knowledge. Behavior Research Methods, Instruments, & Computers, 35(2), 285-293.

    Abstract

    A common way for researchers to model or graphically portray spatial knowledge of a large environment is by applying multidimensional scaling (MDS) to a set of pairwise distance estimations. We introduce two MDS-like techniques that incorporate people’s knowledge of directions instead of (or in addition to) their knowledge of distances. Maps of a familiar environment derived from these procedures were more accurate and were rated by participants as being more accurate than those derived from nonmetric MDS. By incorporating people’s relatively accurate knowledge of directions, these methods offer spatial cognition researchers and behavioral geographers a sharper analytical tool than MDS for studying cognitive maps.
  • Wang, L. (2011). The influence of information structure on language comprehension: A neurocognitive perspective. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2011). The influence of information structure on the depth of semantic processing: How focus and pitch accent determine the size of the N400 effect. Neuropsychologia, 49, 813-820. doi:10.1016/j.neuropsychologia.2010.12.035.

    Abstract

    To highlight relevant information in dialogues, both wh-question context and pitch accent in answers can be used, such that focused information gains more attention and is processed more elaborately. To evaluate the relative influence of context and pitch accent on the depth of semantic processing, we measured Event-Related Potentials (ERPs) to auditorily presented wh-question-answer pairs. A semantically incongruent word in the answer occurred either in focus or non-focus position as determined by the context, and this word was either accented or unaccented. Semantic incongruency elicited different N400 effects in different conditions. The largest N400 effect was found when the question-marked focus was accented, while the other three conditions elicited smaller N400 effects. The results suggest that context and accentuation interact, such that accented focused words were processed more deeply than words in conditions where focus and accentuation mismatched, or where the new information had no marking. In addition, there seem to be sex differences in the depth of semantic processing. For males, a significant N400 effect was observed only when the question-marked focus was accented, while reduced N400 effects were found in the other dialogues. In contrast, females produced similar N400 effects in all the conditions. These results suggest that regardless of external cues, females tend to engage in more elaborate semantic processing compared to males.
  • Warner, N. (2003). Rapid perceptibility as a factor underlying universals of vowel inventories. In A. Carnie, H. Harley, & M. Willie (Eds.), Formal approaches to function in grammar, in honor of Eloise Jelinek (pp. 245-261). Amsterdam: Benjamins.
  • Warner, N., & Cutler, A. (2017). Stress effects in vowel perception as a function of language-specific vocabulary patterns. Phonetica, 74, 81-106. doi:10.1159/000447428.

    Abstract

    Background/Aims: Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels. Methods: All possible sequences of two segments (diphones) in Dutch and in English were presented to native listeners in gated fragments. We recorded identification performance over time throughout the speech signal. The data were analysed here specifically for patterns in the perception of stressed versus unstressed vowels. Results: The data reveal significantly larger stress effects (whereby unstressed vowels are harder to identify than stressed vowels) in English than in Dutch. Both language-specific and shared patterns appear regarding which vowels show stress effects. Conclusion: We explain the larger stress effect in English as reflecting the processing demands caused by the difference in use of unstressed vowels in the lexicon. The larger stress effect in English is due to relative inexperience with processing unstressed full vowels.
  • Wassenaar, M., & Hagoort, P. (2007). Thematic role assignment in patients with Broca's aphasia: Sentence-picture matching electrified. Neuropsychologia, 45(4), 716-740. doi:10.1016/j.neuropsychologia.2006.08.016.

    Abstract

    An event-related brain potential experiment was carried out to investigate on-line thematic role assignment during sentence–picture matching in patients with Broca's aphasia. Subjects were presented with a picture that was followed by an auditory sentence. The sentence either matched the picture or mismatched the visual information depicted. Sentences differed in complexity, and ranged from simple active semantically irreversible sentences to passive semantically reversible sentences. ERPs were recorded while subjects were engaged in sentence–picture matching. In addition, reaction time and accuracy were measured. Three groups of subjects were tested: Broca patients (N = 10), non-aphasic patients with a right hemisphere (RH) lesion (N = 8), and healthy age-matched controls (N = 15). The results of this study showed that, in neurologically unimpaired individuals, thematic role assignment in the context of visual information was an immediate process. This is in contrast to patients with Broca's aphasia, who demonstrated no signs of on-line sensitivity to the picture–sentence mismatches. The syntactic contribution to the thematic role assignment process seemed to be diminished given the reduction and even absence of P600 effects. Nevertheless, Broca patients showed some off-line behavioral sensitivity to the sentence–picture mismatches. The long response latencies of Broca's aphasics make it likely that off-line response strategies were used.
  • Weber, A., & Cutler, A. (2003). Perceptual similarity co-existing with lexical dissimilarity [Abstract]. Abstracts of the 146th Meeting of the Acoustical Society of America. Journal of the Acoustical Society of America, 114(4 Pt. 2), 2422. doi:10.1121/1.1601094.

    Abstract

    The extreme case of perceptual similarity is indiscriminability, as when two second-language phonemes map to a single native category. An example is the English had-head vowel contrast for Dutch listeners; Dutch has just one such central vowel, transcribed [ɛ]. We examine whether the failure to discriminate in phonetic categorization implies indiscriminability in other (e.g., lexical) processing. Eyetracking experiments show that Dutch-native listeners instructed in English to "click on the panda" look (significantly more than native listeners) at a pictured pencil, suggesting that pan- activates their lexical representation of pencil. The reverse, however, is not the case: "click on the pencil" does not induce looks to a panda, suggesting that pen- does not activate panda in the lexicon. Thus prelexically undiscriminated second-language distinctions can nevertheless be maintained in stored lexical representations. The problem of mapping a resulting unitary input to two distinct categories in lexical representations is solved by allowing input to activate only one second-language category. For Dutch listeners to English, this is English [ɛ], as a result of which no vowels in the signal ever map to words containing [æ]. We suggest that the choice of category is here motivated by a more abstract, phonemic, metric of similarity.
  • Weber, A., Broersma, M., & Aoyagi, M. (2011). Spoken-word recognition in foreign-accented speech by L2 listeners. Journal of Phonetics, 39, 479-491. doi:10.1016/j.wocn.2010.12.004.

    Abstract

    Two cross-modal priming studies investigated the recognition of English words spoken with a foreign accent. Auditory English primes were either typical of a Dutch accent or typical of a Japanese accent in English and were presented to both Dutch and Japanese L2 listeners. Lexical-decision times to subsequent visual target words revealed that foreign-accented words can facilitate word recognition for L2 listeners if at least one of two requirements is met: the foreign-accented production is in accordance with the language background of the L2 listener, or the foreign accent is perceptually confusable with the standard pronunciation for the L2 listener. If neither one of the requirements is met, no facilitatory effect of foreign accents on L2 word recognition is found. Taken together, these findings suggest that linguistic experience with a foreign accent affects the ability to recognize words carrying this accent, and there is furthermore a general benefit for L2 listeners for recognizing foreign-accented words that are perceptually confusable with the standard pronunciation.
  • Wegener, C. (2011). Expression of reciprocity in Savosavo. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 213-224). Amsterdam: Benjamins.

    Abstract

    This paper describes how reciprocity is expressed in the Papuan (i.e. non-Austronesian) language Savosavo, spoken in the Solomon Islands. The main strategy is to use the reciprocal nominal mapamapa, which can occur in different NP positions and always triggers default third person singular masculine agreement, regardless of the number and gender of the referents. After a description of this as well as another strategy that is occasionally used (the ‘joint activity construction’), the paper will provide a detailed analysis of data elicited with a set of video stimuli and show that the main strategy is used to describe even clearly asymmetric situations, as long as more than one person acts on more than one person in a joint activity.
  • Wegman, J., Tyborowska, A., Hoogman, M., Vasquez, A. A., & Janzen, G. (2017). The brain-derived neurotrophic factor Val66Met polymorphism affects encoding of object locations during active navigation. European Journal of Neuroscience, 45(12), 1501-1511. doi:10.1111/ejn.13416.

    Abstract

    Brain-derived neurotrophic factor (BDNF) has been shown to be involved in spatial memory and spatial strategy preference. A naturally occurring single nucleotide polymorphism of the BDNF gene (Val66Met) affects activity-dependent secretion of BDNF. The current event-related fMRI study on preselected groups of ‘Met’ carriers and homozygotes of the ‘Val’ allele investigated the role of this polymorphism on encoding and retrieval in a virtual navigation task in 37 healthy volunteers. In each trial, participants navigated toward a target object. During encoding, three positional cues (columns) with directional cues (shadows) were available. During retrieval, the invisible target had to be replaced while either two objects without shadows (objects trial) or one object with a shadow (shadow trial) were available. The experiment consisted of blocks, informing participants of which trial type would be most likely to occur during retrieval. We observed no differences between genetic groups in task performance or time to complete the navigation tasks. The imaging results show that Met carriers compared to Val homozygotes activate the left hippocampus more during successful object location memory encoding. The observed effects were independent of non-significant performance differences or volumetric differences in the hippocampus. These results indicate that variations of the BDNF gene affect memory encoding during spatial navigation, suggesting that lower levels of BDNF in the hippocampus result in less efficient spatial memory processing.
  • Wender, K. F., Haun, D. B. M., Rasch, B. H., & Blümke, M. (2003). Context effects in memory for routes. In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial cognition III: Routes and navigation, human memory and learning, spatial representation and spatial learning (pp. 209-231). Berlin: Springer.
  • Wheeldon, L. (2003). Inhibitory form priming of spoken word production. Language and Cognitive Processes, 18(1), 81-109. doi:10.1080/01690960143000470.

    Abstract

    Three experiments were designed to examine the effect on picture naming of the prior production of a word related in phonological form. In Experiment 1, the latency to produce Dutch words in response to pictures (e.g., hoed, hat) was longer following the production of a form-related word (e.g., hond, dog) in response to a definition on a preceding trial, than when the preceding definition elicited an unrelated word (e.g., kerk, church). Experiment 2 demonstrated that the inhibitory effect disappears when one unrelated word is produced intervening between prime and target productions (e.g., hond-kerk-hoed). The size of the inhibitory effect was not significantly affected by the frequency of the prime words or the target picture names. In Experiment 3, facilitation was observed for word pairs that shared offset segments (e.g., kurk-jurk, cork-dress), whereas inhibition was observed for shared onset segments (e.g., bloed-bloem, blood-flower). However, no priming was observed for prime and target words with shared phonemes but no mismatching segments (e.g., oom-boom, uncle-tree; hek-heks, fence-witch). These findings are consistent with a process of phoneme competition during phonological encoding.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 10, 451-456. doi:10.1111/j.1601-183X.2011.00684.x.

    Abstract

    Early language development is known to be under genetic influence, but the genes affecting normal variation in the general population remain largely elusive. Recent studies of language disorders reported that variants of the CNTNAP2 gene are associated both with language deficits in specific language impairment (SLI) and with language delays in autism. We tested the hypothesis that these CNTNAP2 variants affect communicative behavior, measured at 2 years of age in a large epidemiological sample, the Western Australian Pregnancy Cohort (Raine) Study. Single-point analyses of 1149 children (606 males, 543 females) revealed patterns of association which were strikingly reminiscent of those observed in previous investigations of impaired language, centered on the same genetic markers, and with a consistent direction of effect (rs2710102, p = .0239; rs759178, p = .0248). Based on these findings we performed analyses of four-marker haplotypes of rs2710102-rs759178-rs17236239-rs2538976, and identified significant association (haplotype TTAA, p = .049; haplotype GCAG, p = .0014). Our study suggests that common variants in the exon 13-15 region of CNTNAP2 influence early language acquisition, as assessed at age 2, in the general population. We propose that these CNTNAP2 variants increase susceptibility to SLI or autism when they occur together with other risk factors.

  • Wiese, R., Orzechowska, P., Alday, P. M., & Ulbrich, C. (2017). Structural principles or frequency of use? An ERP experiment on the learnability of consonant clusters. Frontiers in Psychology, 7: 2005. doi:10.3389/fpsyg.2016.02005.

    Abstract

    Phonological knowledge of a language involves knowledge about which segments can be combined under what conditions. Languages vary in the quantity and quality of licensed combinations, in particular sequences of consonants, with Polish being a language with a large inventory of such combinations. The present paper reports on a two-session experiment in which Polish-speaking adult participants learned nonce words with final consonant clusters. The aim was to study the role of two factors which potentially play a role in the learning of phonotactic structures: the phonological principle of sonority (ordering sound segments within the syllable according to their inherent loudness) and the (non-)existence of the clusters in the language as a usage-based phenomenon. EEG responses in two different time windows (in contrast to behavioral responses) show linguistic processing by native speakers of Polish to be sensitive to both distinctions, in spite of the fact that Polish is rich in sonority-violating clusters. In particular, a general learning effect in terms of an N400 effect was found, which was demonstrated to be different for sonority-obeying clusters than for sonority-violating clusters. Furthermore, significant interactions of formedness and session, and of existence and session, demonstrate that both factors, the sonority principle and the frequency pattern, play a role in the learning process.
  • Wilkin, K., & Holler, J. (2011). Speakers’ use of ‘action’ and ‘entity’ gestures with definite and indefinite references. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 293-308). Amsterdam: John Benjamins.

    Abstract

    Common ground is an essential prerequisite for coordination in social interaction, including language use. When referring back to a referent in discourse, this referent is ‘given information’ and therefore in the interactants’ common ground. When a referent is being referred to for the first time, a speaker introduces ‘new information’. The analyses reported here are on gestures that accompany such references when they include definite and indefinite grammatical determiners. The main finding from these analyses is that referents referred to by definite and indefinite articles were equally often accompanied by gesture, but speakers tended to accompany definite references with gestures focusing on action information and indefinite references with gestures focusing on entity information. The findings suggest that speakers use speech and gesture together to design utterances appropriate for speakers with whom they share common ground.

  • Wilkins, D., Kita, S., & Enfield, N. J. (2007). 'Ethnography of pointing' - field worker's guide. In A. Majid (Ed.), Field Manual Volume 10 (pp. 89-95). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492922.

    Abstract

    Pointing gestures are recognised to be a primary manifestation of human social cognition and communicative capacity. The goal of this task is to collect empirical descriptions of pointing practices in different cultural settings.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2007). When language meets action: The neural integration of gesture and speech. Cerebral Cortex, 17(10), 2322-2333. doi:10.1093/cercor/bhl141.

    Abstract

    Although generally studied in isolation, language and action often co-occur in everyday life. Here we investigated one particular form of simultaneous language and action, namely speech and gestures that speakers use in everyday communication. In a functional magnetic resonance imaging study, we identified the neural networks involved in the integration of semantic information from speech and gestures. Verbal and/or gestural content could be integrated easily or less easily with the content of the preceding part of speech. Premotor areas involved in action observation (Brodmann area [BA] 6) were found to be specifically modulated by action information "mismatching" to a language context. Importantly, an increase in integration load of both verbal and gestural information into prior speech context activated Broca's area and adjacent cortex (BA 45/47). A classical language area, Broca's area, is not only recruited for language-internal processing but also when action observation is integrated with speech. These findings provide direct evidence that action and language processing share a high-level neural integration system.
  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2011). A functional role for the motor system in language understanding: Evidence from Theta-Burst Transcranial Magnetic Stimulation. Psychological Science, 22, 849-854. doi:10.1177/0956797611412387.

    Abstract

    Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., “to throw,” “to write”) and verbs describing nonmanual actions (e.g., “to earn,” “to wander”). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.

  • Willems, R. M., Clevis, K., & Hagoort, P. (2011). Add a picture for suspense: Neural correlates of the interaction between language and visual information in the perception of fear. Social Cognitive and Affective Neuroscience, 6, 404-416. doi:10.1093/scan/nsq050.

    Abstract

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
  • Willems, R. M., Benn, Y., Hagoort, P., Toni, I., & Varley, R. (2011). Communicating without a functioning language system: Implications for the role of language in mentalizing. Neuropsychologia, 49, 3130-3135. doi:10.1016/j.neuropsychologia.2011.07.023.

    Abstract

    A debated issue in the relationship between language and thought is how our linguistic abilities are involved in understanding the intentions of others (‘mentalizing’). The results of both theoretical and empirical work have been used to argue that linguistic, and more specifically, grammatical, abilities are crucial in representing the mental states of others. Here we contribute to this debate by investigating how damage to the language system influences the generation and understanding of intentional communicative behaviors. Four patients with pervasive language difficulties (severe global or agrammatic aphasia) engaged in an experimentally controlled non-verbal communication paradigm, which required signaling and understanding a communicative message. Despite their profound language problems they were able to engage in recipient design as well as intention recognition, showing similar indicators of mentalizing as have been observed in the neurologically healthy population. Our results show that aspects of the ability to communicate remain present even when core capacities of the language system are dysfunctional.
  • Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101(3), 278-289. doi:10.1016/j.bandl.2007.03.004.

    Abstract

    Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It will be concluded that there is strong evidence on the interaction between speech and gestures in the brain. This interaction however shares general properties with other domains in which there is interplay between language and action.
  • Willems, R. M., & Casasanto, D. (2011). Flexibility in embodied language understanding. Frontiers in Psychology, 2, 116. doi:10.3389/fpsyg.2011.00116.

    Abstract

    Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
  • Willems, R. M. (2011). Re-appreciating the why of cognition: 35 years after Marr and Poggio. Frontiers in Psychology, 2, 244. doi:10.3389/fpsyg.2011.00244.

    Abstract

    Marr and Poggio’s levels of description are among the most well-known theoretical constructs of twentieth century cognitive science. The framework entails that behavior can and should be considered at three different levels: computation, algorithm, and implementation. In this contribution, the focus is on the computational level of description, the level that describes the “why” of cognition. I argue that the computational level should be taken as a starting point in devising experiments in cognitive (neuro)science. Instead, the starting point in empirical practice often is a focus on the stimulus or on some capacity of the cognitive system. The “why” of cognition tends to be ignored when designing research, and is not considered in subsequent inference from experimental results. The overall aim of this manuscript is to show how re-appreciation of the computational level of description as a starting point for experiments can lead to more informative experimentation.
  • Willems, R. M. (2007). The neural construction of a Tinkertoy [‘Journal club’ review]. The Journal of Neuroscience, 27, 1509-1510. doi:10.1523/JNEUROSCI.0005-07.2007.
  • Wittenburg, P. (2003). The DOBES model of language documentation. Language Documentation and Description, 1, 122-139.
  • Wnuk, E., De Valk, J. M., Huisman, J. L. A., & Majid, A. (2017). Hot and cold smells: Odor-temperature associations across cultures. Frontiers in Psychology, 8: 1373. doi:10.3389/fpsyg.2017.01373.

    Abstract

    It is often assumed that odors are associated with hot and cold temperature, since odor processing may trigger thermal sensations, such as coolness in the case of mint. It is unknown, however, whether people make consistent temperature associations for a variety of everyday odors, and, if so, what determines them. Previous work investigating the bases of cross-modal associations suggests a number of possibilities, including universal forces (e.g., perception), as well as culture-specific forces (e.g., language and cultural beliefs). In this study, we examined odor-temperature associations in three cultures—Maniq (N = 11), Thai (N = 24), and Dutch (N = 24)—who differ with respect to their cultural preoccupation with odors, their odor lexicons, and their beliefs about the relationship of odors (and odor objects) to temperature. Participants matched 15 odors to temperature by touching cups filled with hot or cold water, and described the odors in their native language. The results showed no consistent associations among the Maniq, and only a handful of consistent associations between odor and temperature among the Thai and Dutch. The consistent associations differed across the two groups, arguing against their universality. Further analysis revealed that the cross-modal associations could not be explained by language, but could be the result of cultural beliefs.
  • Womelsdorf, T., Schoffelen, J.-M., Oostenveld, R., Singer, W., Desimone, R., Engel, A. K., & Fries, P. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316, 1609-1612. doi:10.1126/science.1139597.

    Abstract

    Brain processing depends on the interactions between neuronal groups. Those interactions are governed by the pattern of anatomical connections and by yet unknown mechanisms that modulate the effective strength of a given connection. We found that the mutual influence among neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations supporting interactions between the groups preceded those interactions by a few milliseconds, consistent with a mechanistic role. These effects were specific in time, frequency, and space, and we therefore propose that the pattern of synchronization flexibly determines the pattern of neuronal interactions.
  • Wong, M. M. K., Watson, L. M., & Becker, E. B. E. (2017). Recent advances in modelling of cerebellar ataxia using induced pluripotent stem cells. Journal of Neurology & Neuromedicine, 2(7), 11-15. doi:10.29245/2572.942X/2017/7.1134.

    Abstract

    The cerebellar ataxias are a group of incurable brain disorders that are caused primarily by the progressive dysfunction and degeneration of cerebellar Purkinje cells. The lack of reliable disease models for the heterogeneous ataxias has hindered the understanding of the underlying pathogenic mechanisms as well as the development of effective therapies for these devastating diseases. Recent advances in the field of induced pluripotent stem cell (iPSC) technology offer new possibilities to better understand and potentially reverse disease pathology. Given the neurodevelopmental phenotypes observed in several types of ataxias, iPSC-based models have the potential to provide significant insights into disease progression, as well as opportunities for the development of early intervention therapies. To date, however, very few studies have successfully used iPSC-derived cells to model cerebellar ataxias. In this review, we focus on recent breakthroughs in generating human iPSC-derived Purkinje cells. We also highlight the future challenges that will need to be addressed in order to fully exploit these models for the modelling of the molecular mechanisms underlying cerebellar ataxias and the development of effective therapeutics.
  • Yager, J., & Burenhult, N. (2017). Jedek: a newly discovered Aslian variety of Malaysia. Linguistic Typology, 21(3), 493-545. doi:10.1515/lingty-2017-0012.

    Abstract

    Jedek is a previously unrecognized variety of the Northern Aslian subgroup of the Aslian branch of the Austroasiatic language family. It is spoken by c. 280 individuals in the resettlement area of Sungai Rual, near Jeli in Kelantan state, Peninsular Malaysia. The community originally consisted of several bands of foragers along the middle reaches of the Pergau river. Jedek’s distinct status first became known during a linguistic survey carried out in the DOBES project Tongues of the Semang (2005-2011). This paper describes the process leading up to its discovery and provides an overview of its typological characteristics.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2017). The phonological unit of Japanese Kanji compounds: A masked priming investigation. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1303-1328. doi:10.1037/xhp0000374.

    Abstract

    Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiment 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes.
  • Zeshan, U. (2003). Aspects of Türk Işaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TID), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TID grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TID and the language community using this sign language, five domains of TID grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.
  • Zeshan, U., & Panda, S. (2011). Reciprocal constructions in Indo-Pakistani Sign Language. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 91-113). Amsterdam: Benjamins.

    Abstract

    Indo-Pakistani Sign Language (IPSL) is the sign language used by deaf communities in a large region across India and Pakistan. This visual-gestural language has a dedicated construction for specifically expressing reciprocal relationships, which can be applied to agreement verbs and to auxiliaries. The reciprocal construction relies on a change in the movement pattern of the signs it applies to. In addition, IPSL has a number of other strategies which can have a reciprocal interpretation, and the IPSL lexicon includes a good number of inherently reciprocal signs. All reciprocal expressions can be modified in complex ways that rely on the grammatical use of the sign space. Considering grammaticalisation and lexicalisation processes linking some of these constructions is also important for a better understanding of reciprocity in IPSL.
  • Zhen, Z., Kong, X., Huang, L., Yang, Z., Wang, X., Hao, X., Huang, T., Song, Y., & Liu, J. (2017). Quantifying the variability of scene-selective regions: Interindividual, interhemispheric, and sex differences. Human Brain Mapping, 38(4), 2260-2275. doi:10.1002/hbm.23519.

    Abstract

    Scene-selective regions (SSRs), including the parahippocampal place area (PPA), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS), are among the most widely characterized functional regions in the human brain. However, previous studies have mostly focused on the commonality within each SSR, providing little information on different aspects of their variability. In a large group of healthy adults (N = 202), we used functional magnetic resonance imaging to investigate different aspects of topographical and functional variability within SSRs, including interindividual, interhemispheric, and sex differences. First, the PPA, RSC, and TOS were delineated manually for each individual. We then demonstrated that SSRs showed substantial interindividual variability in both spatial topography and functional selectivity. We further identified consistent interhemispheric differences in the spatial topography of all three SSRs, but distinct interhemispheric differences in scene selectivity. Moreover, we found that all three SSRs showed stronger scene selectivity in men than in women. In summary, our work thoroughly characterized the interindividual, interhemispheric, and sex variability of the SSRs and invites future work on the origin and functional significance of these variabilities. Additionally, we constructed the first probabilistic atlases for the SSRs, which provide the detailed anatomical reference for further investigations of the scene network.
  • Ziegler, A., DeStefano, A. L., König, I. R., Bardel, C., Brinza, D., Bull, S., Cai, Z., Glaser, B., Jiang, W., Lee, K. E., Li, C. X., Li, J., Li, X., Majoram, P., Meng, Y., Nicodemus, K. K., Platt, A., Schwarz, D. F., Shi, W., Shugart, Y. Y., Stassen, H. H., Sun, Y. V., Won, S., Wang, W., Wahba, G., Zagaar, U. A., & Zhao, Z. (2007). Data mining, neural nets, trees–problems 2 and 3 of Genetic Analysis Workshop 15. Genetic Epidemiology, 31(Suppl 1), S51-S60. doi:10.1002/gepi.20280.

    Abstract

    Genome-wide association studies using thousands to hundreds of thousands of single nucleotide polymorphism (SNP) markers and region-wide association studies using a dense panel of SNPs are already in use to identify disease susceptibility genes and to predict disease risk in individuals. Because these tasks become increasingly important, three different data sets were provided for the Genetic Analysis Workshop 15, thus allowing examination of various novel and existing data mining methods for both classification and identification of disease susceptibility genes, gene by gene or gene by environment interaction. The approach most often applied in this presentation group was random forests because of its simplicity, elegance, and robustness. It was used for prediction and for screening for interesting SNPs in a first step. The logistic tree with unbiased selection approach appeared to be an interesting alternative to efficiently select interesting SNPs. Machine learning, specifically ensemble methods, might be useful as pre-screening tools for large-scale association studies because they can be less prone to overfitting, can be less computer processor time intensive, can easily include pair-wise and higher-order interactions compared with standard statistical approaches and can also have a high capability for classification. However, improved implementations that are able to deal with hundreds of thousands of SNPs at a time are required.
  • De Zubicaray, G., & Fisher, S. E. (Eds.). (2017). Genes, brain and language [Special Issue]. Brain and Language, 172.
  • De Zubicaray, G., & Fisher, S. E. (2017). Genes, Brain, and Language: A brief introduction to the Special Issue. Brain and Language, 172, 1-2. doi:10.1016/j.bandl.2017.08.003.
  • Zwitserlood, I. (2003). Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). PhD Thesis, LOT, Utrecht. Retrieved from http://igitur-archive.library.uu.nl/dissertations/2003-0717-122837/UUindex.html.

    Abstract

    This study investigates the morphological and morphosyntactic characteristics of hand configurations in signs, particularly in Nederlandse Gebarentaal (NGT). The literature on sign languages in general acknowledges that hand configurations can function as morphemes, more specifically as classifiers, in a subset of signs: verbs expressing the motion, location, and existence of referents (VELMs). These verbs are considered the output of productive sign formation processes. In contrast, other signs in which similar hand configurations appear (iconic or motivated signs) have been considered to be lexicalized signs, not involving productive processes. This research report shows that meaningful hand configurations have (at least) two very different functions in the grammar of NGT (and presumably in other sign languages, too). First, they are agreement markers on VELMs, and hence are functional elements. Second, they are roots in motivated signs, and thus lexical elements. The latter signs are analysed as root compounds and are formed from various roots by productive processes. The similarities in surface form and differences in morphosyntactic characteristics observed in comparison of VELMs and root compounds are attributed to their different structures and to the sign language interface between grammar and phonetic form.
  • Zwitserlood, I. (2011). Gebruiksgemak van het eerste Nederlandse Gebarentaal woordenboek kan beter [Book review]. Levende Talen Magazine, 4, 46-47.

    Abstract

    Review: User friendliness of the first dictionary of Sign Language of the Netherlands can be improved
  • Zwitserlood, I. (2011). Gevraagd: medewerkers verzorgingshuis met een goede oog-handcoördinatie. Het meten van NGT-vaardigheid. Levende Talen Magazine, 1, 44-46.

    Abstract

    (Needed: staff for residential care home with good eye-hand coordination. Measuring NGT-skills.)
  • Zwitserlood, I. (2011). Het Corpus NGT en de dagelijkse lespraktijk. Levende Talen Magazine, 6, 46.

    Abstract

    (The Corpus NGT and the daily practice of language teaching)
  • Zwitserlood, I. (2011). Het Corpus NGT en de opleiding leraar/tolk NGT. Levende Talen Magazine, 1, 40-41.

    Abstract

    (The Corpus NGT and teacher NGT/interpreter NGT training)
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.
