Publications

  • Van Alphen, P. M., & Van Berkum, J. J. A. (2012). Semantic involvement of initial and final lexical embeddings during sense-making: The advantage of starting late. Frontiers in Psychology, 3, 190. doi:10.3389/fpsyg.2012.00190.

    Abstract

    During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like 'day' in 'daisy', or 'dean' in 'sardine'. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding ('day' in 'daisy') did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding ('dean' in 'sardine') did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
  • Van Uytvanck, D., Stehouwer, H., & Lampen, L. (2012). Semantic metadata mapping in practice: The Virtual Language Observatory. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 1029-1034). European Language Resources Association (ELRA).

    Abstract

    In this paper we present the Virtual Language Observatory (VLO), a metadata-based portal for language resources. It is completely based on the Component Metadata (CMDI) and ISOcat standards. This approach allows for the use of heterogeneous metadata schemas while maintaining semantic compatibility. We describe the metadata harvesting process, based on OAI-PMH, and the conversion from several formats (OLAC, IMDI and the CLARIN LRT inventory) to their CMDI counterpart profiles. Then we focus on some post-processing steps to polish the harvested records. Next, the ingestion of the CMDI files into the VLO facet browser is described. We also include an overview of the changes since the first version of the VLO, based on user feedback from the CLARIN community. Finally, there is an overview of additional ideas and improvements for future versions of the VLO.
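
    The harvesting step mentioned in this abstract follows the standard OAI-PMH protocol (verb=ListRecords with a metadataPrefix, paged via resumptionToken). The Python sketch below is only an illustration of such a harvest loop, not the VLO's actual implementation; the endpoint URL and the metadata prefix are hypothetical placeholders, and only the standard library is used.

      # Minimal OAI-PMH ListRecords harvest loop (illustrative sketch only;
      # endpoint URL and metadataPrefix are hypothetical placeholders).
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}
      ENDPOINT = "https://example.org/oai"  # hypothetical repository endpoint
      PREFIX = "cmdi"                       # hypothetical metadata prefix

      def harvest(endpoint=ENDPOINT, metadata_prefix=PREFIX):
          """Yield OAI-PMH record elements, following resumptionTokens until exhausted."""
          params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
          while True:
              url = endpoint + "?" + urllib.parse.urlencode(params)
              with urllib.request.urlopen(url) as response:
                  tree = ET.parse(response)
              for record in tree.iterfind(".//oai:record", OAI_NS):
                  yield record
              token = tree.find(".//oai:resumptionToken", OAI_NS)
              if token is None or not (token.text or "").strip():
                  break  # no further result pages
              params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

      # Example use: print the OAI identifier of each harvested record.
      for rec in harvest():
          identifier = rec.find("oai:header/oai:identifier", OAI_NS)
          print(identifier.text if identifier is not None else "<no identifier>")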
  • Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.

    Abstract

    Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.
  • Van de Ven, M., Ernestus, M., & Schreuder, R. (2012). Predicting acoustically reduced words in spontaneous speech: The role of semantic/syntactic and acoustic cues in context. Laboratory Phonology, 3, 455-481. doi:10.1515/lp-2012-0020.

    Abstract

    In spontaneous speech, words may be realised shorter than in formal speech (e.g., English yesterday may be pronounced like [jɛʃeɪ]). Previous research has shown that context is required to understand highly reduced pronunciation variants. We investigated the extent to which listeners can predict low predictability reduced words on the basis of the semantic/syntactic and acoustic cues in their context. In four experiments, participants were presented with either the preceding context or the preceding and following context of reduced words, and either heard these fragments of conversational speech, or read their orthographic transcriptions. Participants were asked to predict the missing reduced word on the basis of the context alone, choosing from four plausible options. Participants made use of acoustic cues in the context, although casual speech typically has a high speech rate, and acoustic cues are much more unclear than in careful speech. Moreover, they relied on semantic/syntactic cues. Whenever there was a conflict between acoustic and semantic/syntactic contextual cues, measured as the word's probability given the surrounding words, listeners relied more heavily on acoustic cues. Further, context appeared generally insufficient to predict the reduced words, underpinning the significance of the acoustic characteristics of the reduced words themselves.
  • Van Berkum, J. J. A. (2012). Zonder gevoel geen taal. Neerlandistiek.nl. Wetenschappelijk tijdschrift voor de Nederlandse taal- en letterkunde, 12(01).

    Abstract

    Illustrated republication of the inaugural lecture, delivered on accepting the chair Discourse, cognitie en communicatie on 30 September 2011 (Utrecht University). Unlike the original lecture text, this republication also contains various illustrations and links. In addition, two accompanying articles by colleagues respond to it (see http://www.neerlandistiek.nl/12.01a/ and http://www.neerlandistiek.nl/12.01b/).
  • Van Leeuwen, E. J. C., Cronin, K. A., & Haun, D. B. M. (2017). Tool use for corpse cleaning in chimpanzees. Scientific Reports, 7: 44091. doi:10.1038/srep44091.

    Abstract

    For the first time, chimpanzees have been observed using tools to clean the corpse of a deceased group member. A female chimpanzee sat down at the dead body of a young male, selected a firm stem of grass, and started to intently remove debris from his teeth. This report contributes novel behaviour to the chimpanzee's ethogram, and highlights how crucial information for reconstructing the evolutionary origins of human mortuary practices may be missed by refraining from developing adequate observation techniques to capture non-human animals' death responses.
  • Van Goch, M. M., Verhoeven, L., & McQueen, J. M. (2017). Trainability in lexical specificity mediates between short-term memory and both vocabulary and rhyme awareness. Learning and Individual Differences, 57, 163-169. doi:10.1016/j.lindif.2017.05.008.

    Abstract

    A major goal in the early years of elementary school is learning to read, a process in which children show substantial individual differences. To shed light on the underlying processes of early literacy, this study investigates the interrelations among four known precursors to literacy: phonological short-term memory, vocabulary size, rhyme awareness, and trainability in the phonological specificity of lexical representations, by means of structural equation modelling, in a group of 101 4-year-old children. Trainability in lexical specificity was assessed by teaching children pairs of new phonologically-similar words. Standardized tests of receptive vocabulary, short-term memory, and rhyme awareness were used. The best-fitting model showed that trainability in lexical specificity partially mediated between short-term memory and both vocabulary size and rhyme awareness. These results demonstrate that individual differences in the ability to learn phonologically-similar new words are related to individual differences in vocabulary size and rhyme awareness.
  • Vanlangendonck, F. (2017). Finding common ground: On the neural mechanisms of communicative language production. PhD Thesis, Radboud University, Nijmegen.
  • Varma, S., Takashima, A., Krewinkel, S., Van Kooten, M., Fu, L., Medendorp, W. P., Kessels, R. P. C., & Daselaar, S. M. (2017). Non-interfering effects of active post-encoding tasks on episodic memory consolidation in humans. Frontiers in Behavioral Neuroscience, 11: 54. doi:10.3389/fnbeh.2017.00054.

    Abstract

    So far, studies that investigated interference effects of post-learning processes on episodic memory consolidation in humans have used tasks involving only complex and meaningful information. Such tasks require reallocation of general or encoding-specific resources away from consolidation-relevant activities. The possibility that interference can be elicited using a task that heavily taxes our limited brain resources, but has low semantic and hippocampal related long-term memory processing demands, has never been tested. We address this question by investigating whether consolidation could persist in parallel with an active, encoding-irrelevant, minimally semantic task, regardless of its high resource demands for cognitive processing. We distinguish the impact of such a task on consolidation based on whether it engages resources that are: (1) general/executive, or (2) specific/overlapping with the encoding modality. Our experiments compared subsequent memory performance across two post-encoding consolidation periods: quiet wakeful rest and a cognitively demanding n-Back task. Across six different experiments (total N = 176), we carefully manipulated the design of the n-Back task to target general or specific resources engaged in the ongoing consolidation process. In contrast to previous studies that employed interference tasks involving conceptual stimuli and complex processing demands, we did not find any differences between n-Back and rest conditions on memory performance at delayed test, using both recall and recognition tests. Our results indicate that: (1) quiet, wakeful rest is not a necessary prerequisite for episodic memory consolidation; and (2) post-encoding cognitive engagement does not interfere with memory consolidation when task-performance has minimal semantic and hippocampally-based episodic memory processing demands. We discuss our findings with reference to resource and reactivation-led interference theories.
  • Verdonschot, R. G., Middelburg, R., Lensink, S. E., & Schiller, N. O. (2012). Morphological priming survives a language switch. Cognition, 124(3), 343-349. doi:10.1016/j.cognition.2012.05.019.

    Abstract

    In a long-lag morphological priming experiment, Dutch (L1)-English (L2) bilinguals were asked to name pictures and read aloud words. A design using non-switch blocks, consisting solely of Dutch stimuli, and switch-blocks, consisting of Dutch primes and targets with intervening English trials, was administered. Target picture naming was facilitated by morphologically related primes in both non-switch and switch blocks with equal magnitude. These results contrast with some assumptions of sustained reactive inhibition models. However, models that do not assume that bilinguals have to reactively suppress all activation of the non-target language can account for these data.
  • Verga, L., & Kotz, S. A. (2017). Help me if I can't: Social interaction effects in adult contextual word learning. Cognition, 168, 76-90. doi:10.1016/j.cognition.2017.06.018.

    Abstract

    A major challenge in second language acquisition is to build up new vocabulary. How is it possible to identify the meaning of a new word among several possible referents? Adult learners typically use contextual information, which reduces the number of possible referents a new word can have. Alternatively, a social partner may facilitate word learning by directing the learner’s attention toward the correct new word meaning. While much is known about the role of this form of ‘joint attention’ in first language acquisition, little is known about its efficacy in second language acquisition. Consequently, we introduce and validate a novel visual word learning game to evaluate how joint attention affects the contextual learning of new words in a second language. Adult learners either acquired new words in a constant or variable sentence context by playing the game with a knowledgeable partner, or by playing the game alone on a computer. Results clearly show that participants who learned new words in social interaction (i) are faster in identifying a correct new word referent in variable sentence contexts, and (ii) temporally coordinate their behavior with a social partner. Testing the learned words in a post-learning recall or recognition task showed that participants who learned interactively better recognized words originally learned in a variable context. While this result may suggest that interactive learning facilitates the allocation of attention to a target referent, the differences in the performance during recognition and recall call for further studies investigating the effect of social interaction on learning performance. In summary, we provide first evidence on the role of joint attention in second language learning. Furthermore, the new interactive learning game offers itself to further testing in complex neuroimaging research, where the lack of appropriate experimental set-ups has so far limited the investigation of the neural basis of adult word learning in social interaction.
  • Verhoeven, L., Baayen, R. H., & Schreuder, R. (2004). Orthographic constraints and frequency effects in complex word identification. Written Language and Literacy, 7(1), 49-59.

    Abstract

    In an experimental study we explored the role of word frequency and orthographic constraints in the reading of Dutch bisyllabic words. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme E may represent three different vowels: /ɛ/, /e/, or /ə/. In the experiment, skilled adult readers were presented lists of bisyllabic words containing the vowel E in the initial syllable and the same grapheme or another vowel in the second syllable. We expected word frequency to be related to word latency scores. On the basis of general word frequency data, we also expected the interpretation of the initial syllable as a stressed /e/ to be facilitated as compared to the interpretation of an unstressed /ə/. We found a strong negative correlation between word frequency and latency scores. Moreover, for words with E in either syllable we found a preference for a stressed /e/ interpretation, indicating a lexical frequency effect. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C. (2017). What bats have to say about speech and language. Psychonomic Bulletin & Review, 24(1), 111-117. doi:10.3758/s13423-016-1060-3.

    Abstract

    Understanding the biological foundations of language is vital to gaining insight into how the capacity for language may have evolved in humans. Animal models can be exploited to learn about the biological underpinnings of shared human traits, and although no other animals display speech or language, a range of behaviors found throughout the animal kingdom are relevant to speech and spoken language. To date, such investigations have been dominated by studies of our closest primate relatives searching for shared traits, or more distantly related species that are sophisticated vocal communicators, like songbirds. Herein I make the case for turning our attention to the Chiropterans, to shed new light on the biological encoding and evolution of human language-relevant traits. Bats employ complex vocalizations to facilitate navigation as well as social interactions, and are exquisitely tuned to acoustic information. Furthermore, bats display behaviors such as vocal learning and vocal turn-taking that are directly pertinent for human spoken language. Emerging technologies are now allowing the study of bat vocal communication, from the behavioral to the neurobiological and molecular level. Although it is clear that no single animal model can reflect the complexity of human language, by comparing such findings across diverse species we can identify the shared biological mechanisms likely to have influenced the evolution of human language.
  • Viebahn, M. C., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in natural speech. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 2019-2022).

    Abstract

    This paper presents a corpus study that investigates the co-occurrence of reduced word forms in natural speech. We extracted Dutch past participles from three different speech registers and investigated the influence of several predictor variables on the presence and duration of schwas in prefixes and /t/s in suffixes. Our results suggest that reduced word forms tend to co-occur even if we partial out the effect of speech rate. The implications of our findings for episodic and abstractionist models of lexical representation are discussed.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2017). Speaking style influences the brain’s electrophysiological response to grammatical errors in speech comprehension. Journal of Cognitive Neuroscience, 29(7), 1132-1146. doi:10.1162/jocn_a_01095.

    Abstract

    This electrophysiological study asked whether the brain processes grammatical gender violations in casual speech differently than in careful speech. Native speakers of Dutch were presented with utterances that contained adjective-noun pairs in which the adjective was either correctly inflected with a word-final schwa (e.g. een spannende roman “a suspenseful novel”) or incorrectly uninflected without that schwa (een spannend roman). Consistent with previous findings, the uninflected adjectives elicited an electrical brain response sensitive to syntactic violations when the talker was speaking in a careful manner. When the talker was speaking in a casual manner, this response was absent. A control condition showed electrophysiological responses for carefully as well as casually produced utterances with semantic anomalies, showing that listeners were able to understand the content of both types of utterance. The results suggest that listeners take information about the speaking style of a talker into account when processing the acoustic-phonetic information provided by the speech signal. Absent schwas in casual speech are effectively not grammatical gender violations. These changes in syntactic processing are evidence of contextually-driven neural flexibility.

  • Vigliocco, G., Vinson, D. P., Indefrey, P., Levelt, W. J. M., & Hellwig, F. M. (2004). Role of grammatical gender and semantics in German word production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 483-497. doi:10.1037/0278-7393.30.2.483.

    Abstract

    Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Marx, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender preservation effect in German. In all experiments, semantic substitution errors were induced using a continuous naming paradigm. In Experiment 1, it was found that gender preservation disappeared when speakers produced bare nouns. Gender preservation was found when speakers produced phrases with determiners marked for gender (Experiment 2) but not when the produced determiners were not marked for gender (Experiment 3). These results are discussed in the context of models of lexical retrieval during production.
  • Voermans, N. C., Petersson, K. M., Daudey, L., Weber, B., Van Spaendonck, K. P., Kremer, H. P. H., & Fernández, G. (2004). Interaction between the Human Hippocampus and the Caudate Nucleus during Route Recognition. Neuron, 43, 427-435. doi:10.1016/j.neuron.2004.07.009.

    Abstract

    Navigation through familiar environments can rely upon distinct neural representations that are related to different memory systems with either the hippocampus or the caudate nucleus at their core. However, it is a fundamental question whether and how these systems interact during route recognition. To address this issue, we combined a functional neuroimaging approach with a naturally occurring, well-controlled human model of caudate nucleus dysfunction (i.e., pre-clinical and early-stage Huntington’s disease). Our results reveal a noncompetitive interaction so that the hippocampus compensates for gradual caudate nucleus dysfunction with a gradual activity increase, maintaining normal behavior. Furthermore, we revealed an interaction between medial temporal and caudate activity in healthy subjects, which was adaptively modified in Huntington patients to allow compensatory hippocampal processing. Thus, the two memory systems contribute in a noncompetitive, cooperative manner to route recognition, which enables the hippocampus to compensate seamlessly for the functional degradation of the caudate nucleus.
  • Vogels, J., & Van Bergen, G. (2017). Where to place inaccessible subjects in Dutch: The role of definiteness and animacy. Corpus linguistics and linguistic theory, 13(2), 369-398. doi:10.1515/cllt-2013-0021.

    Abstract

    Cross-linguistically, both subjects and topical information tend to be placed at the beginning of a sentence. Subjects are generally highly topical, causing both tendencies to converge on the same word order. However, subjects that lack prototypical topic properties may give rise to an incongruence between the preference to start a sentence with the subject and the preference to start a sentence with the most accessible information. We present a corpus study in which we investigate in what syntactic position (preverbal or postverbal) such low-accessible subjects are typically found in Dutch natural language. We examine the effects of both discourse accessibility (definiteness) and inherent accessibility (animacy). Our results show that definiteness and animacy interact in determining subject position in Dutch. Non-referential (bare) subjects are less likely to occur in preverbal position than definite subjects, and this tendency is reinforced when the subject is inanimate. This suggests that these two properties that make the subject less accessible together can ‘gang up’ against the subject-first preference. The results support a probabilistic multifactorial account of syntactic variation.
  • Volker-Touw, C. M., de Koning, H. D., Giltay, J., De Kovel, C. G. F., van Kempen, T. S., Oberndorff, K., Boes, M., van Steensel, M. A., van Well, G. T., Blokx, W. A., Schalkwijk, J., Simon, A., Frenkel, J., & van Gijn, M. E. (2017). Erythematous nodes, urticarial rash and arthralgias in a large pedigree with NLRC4-related autoinflammatory disease, expansion of the phenotype. British Journal of Dermatology, 176(1), 244-248. doi:10.1111/bjd.14757.

    Abstract

    Autoinflammatory disorders (AID) are a heterogeneous group of diseases, characterized by an unprovoked innate immune response, resulting in recurrent or ongoing systemic inflammation and fever. Inflammasomes are protein complexes with an essential role in pyroptosis and the caspase-1-mediated activation of the proinflammatory cytokines IL-1β, IL-17 and IL-18.
  • Von Stutterheim, C., & Klein, W. (2004). Die Gesetze des Geistes sind metrisch: Hölderlin und die Sprachproduktion. In H. Schwarz (Ed.), Fenster zur Welt: Deutsch als Fremdsprachenphilologie (pp. 439-460). München: Iudicium.
  • von Stutterheim, C., Andermann, M., Carroll, M., Flecken, M., & Schmiedtova, B. (2012). How grammaticized concepts shape event conceptualization in language production: Insights from linguistic analysis, eye tracking data, and memory performance. Linguistics, 50(4), 833-867. doi:10.1515/ling-2012-0026.

    Abstract

    The role of grammatical systems in profiling particular conceptual categories is used as a key in exploring questions concerning language specificity during the conceptualization phase in language production. This study focuses on the extent to which crosslinguistic differences in the concepts profiled by grammatical means in the domain of temporality (grammatical aspect) affect event conceptualization and distribution of attention when talking about motion events. The analyses, which cover native speakers of Standard Arabic, Czech, Dutch, English, German, Russian and Spanish, not only involve linguistic evidence, but also data from an eye tracking experiment and a memory test. The findings show that direction of attention to particular parts of motion events varies to some extent with the existence of grammaticized means to express imperfective/progressive aspect. Speakers of languages that do not have grammaticized aspect of this type are more likely to take a holistic view when talking about motion events and attend to as well as refer to endpoints of motion events, in contrast to speakers of aspect languages.

  • Von Stutterheim, C., & Klein, W. (1989). Referential movement in descriptive and narrative discourse. In R. Dietrich, & C. F. Graumann (Eds.), Language processing in social context (pp. 39-76). Amsterdam: Elsevier.
  • De Vos, C. (2012). Sign-spatiality in Kata Kolok: How a village sign language in Bali inscribes its signing space. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    In a small village in the north of Bali called Bengkala, relatively many people inherit deafness. The Balinese therefore refer to this village as Desa Kolok, which means 'deaf village'. Connie de Vos studied Kata Kolok, the sign language of this village, and the ways in which the language recruits space to talk about both spatial and non-spatial matters. The small village community Bengkala in the north of Bali has almost 3,000 inhabitants. Of all the inhabitants, 57% use sign language, with varying degrees of fluency. But of this signing community (between 1,200 and 1,800 signers, depending on your definition of 'signer'), only 4% are deaf. So, not only do the deaf people of Bengkala use the sign language Kata Kolok, but also the majority of the hearing population.
    "I've worked with deaf people from all over Asia, Europe, and also some signers in America," says Connie de Vos of MPI's Language and Cognition Department, and Centre for Language Studies (RU). "What sets apart this particular deaf village is that deaf individuals are highly integrated within the village clans. There is really a huge proportion of hearing signers." The sign language currently functions in all major aspects of village life and has been acquired from birth by multiple generations of deaf, native signers. According to De Vos, Kata Kolok is a fully-fledged sign language in every sense of the word. As a collaborative project, she has initiated inclusive deaf education within the village and now Kata Kolok is used as the primary language of instruction. De Vos' primary finding is that Kata Kolok discourse uses a different system of referring to space than other sign languages. Spatial relations are represented by a so-called "absolute frame of reference", based on geographic locations and wind directions. "All sign languages, as we know, use relative constructions for spatial relations. They use signs comparable to words like 'left' and 'right' instead of 'east' and 'west'. Kata Kolok does the latter. Kata Kolok signers appear to have an internal compass to continually register their position in space." De Vos is the first sign linguist who has documented Kata Kolok extensively. She spent more than a year in the village and collected over a hundred hours of video material of spontaneous conversations. "One of the things I've noticed is that language doesn't really emerge out of nothing," she says. "Signers adopt a local gesture system and transform it into a new and much more systematic sign language. A lot of the signs refer to concepts they're familiar with. That's why hearing signers have no difficulties in picking up Kata Kolok. Kata Kolok unites the hearing and the deaf."

  • De Vos, C., & Palfreyman, N. (2012). [Review of the book Deaf around the World: The impact of language / ed. by Mathur & Napoli]. Journal of Linguistics, 48, 731-735.

    Abstract

    First paragraph. Since its advent half a century ago, the field of sign language linguistics has had close ties to education and the empowerment of deaf communities, a union that is fittingly celebrated by Deaf around the world: The impact of language. With this fruitful relationship in mind, sign language researchers and deaf educators gathered in Philadelphia in 2008, and in the volume under review, Gaurav Mathur & Donna Jo Napoli (henceforth M&N) present a selection of papers from this conference, organised in two parts: ‘Sign languages: Creation, context, form’, and ‘Social issues/civil rights’. Each of the chapters is accompanied by a response chapter on the same or a related topic. The first part of the volume focuses on the linguistics of sign languages and includes papers on the impact of language modality on morphosyntax, second language acquisition, and grammaticalisation, highlighting the fine balance that sign linguists need to strike when conducting methodologically sound research. The second part of the book includes accounts by deaf activists from countries including China, India, Japan, Kenya, South Africa and Sweden who are considered prominent figures in areas such as deaf education, politics, culture and international development.
  • De Vos, C., & Zeshan, U. (2012). Introduction: Demographic, sociocultural, and linguistic variation across rural signing communities. In U. Zeshan, & C. de Vos (Eds.), Sign languages in village communities: Anthropological and linguistic insights (pp. 2-23). Berlin: Mouton De Gruyter.
  • De Vos, C. (2012). Kata Kolok: An updated sociolinguistic profile. In U. Zeshan (Ed.), Sign languages in village communities: Anthropological and linguistic insights (pp. 381-386). Berlin: Mouton de Gruyter.
  • De Vos, C. (2004). Over de biologische functie van taal: Pinker vs. Chomsky. Honours Review, 2(1), 20-25.

    Abstract

    How did the complex language of humans come about? Gradually, through natural selection, because growing grammatical abilities gave humans an evolutionary advantage? Or suddenly, as an unintended by-product or side effect of a genetic mutation, without any adaptive process being involved? In this article I set the arguments of Pinker and Bloom for the first position against the arguments of Chomsky and Gould for the second. I then show that these two extreme positions leave room for other options that are worth further investigation. For example, genetic research in the coming decades may yield information that makes a more nuanced view of both positions necessary.
  • De Vos, C. (2012). The Kata Kolok perfective in child signing: Coordination of manual and non-manual components. In U. Zeshan, & C. De Vos (Eds.), Sign languages in village communities: Anthropological and linguistic insights (pp. 127-152). Berlin: Mouton de Gruyter.
  • De Vries, M. H., Petersson, K. M., Geukes, S., Zwitserlood, P., & Christiansen, M. H. (2012). Processing multiple non-adjacent dependencies: Evidence from sequence learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2065-2076. doi:10.1098/rstb.2011.0414.

    Abstract

    Processing non-adjacent dependencies is considered to be one of the hallmarks of human language. Assuming that sequence-learning tasks provide a useful way to tap natural-language-processing mechanisms, we cross-modally combined serial reaction time and artificial-grammar learning paradigms to investigate the processing of multiple nested (A1A2A3B3B2B1) and crossed dependencies (A1A2A3B1B2B3), containing either three or two dependencies. Both reaction times and prediction errors highlighted problems with processing the middle dependency in nested structures (A1A2A3B3_B1), reminiscent of the ‘missing-verb effect’ observed in English and French, but not with crossed structures (A1A2A3B1_B3). Prior linguistic experience did not play a major role: native speakers of German and Dutch—which permit nested and crossed dependencies, respectively—showed a similar pattern of results for sequences with three dependencies. As for sequences with two dependencies, reaction times and prediction errors were similar for both nested and crossed dependencies. The results suggest that constraints on the processing of multiple non-adjacent dependencies are determined by the specific ordering of the non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved (i.e. two or three). Furthermore, these constraints may not be specific to language but instead derive from limitations on structured sequence learning.
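
    As a purely illustrative aside (not the stimulus-generation code used in the study), the short Python sketch below shows how nested and crossed sequences of the kind described above can be built from hypothetical dependency pairs.

      # Build nested (A1 A2 A3 B3 B2 B1) and crossed (A1 A2 A3 B1 B2 B3) sequences
      # from dependency pairs; the element labels are hypothetical placeholders.
      def nested(pairs):
          """Dependencies resolve in reverse order (centre-embedded)."""
          return [a for a, _ in pairs] + [b for _, b in reversed(pairs)]

      def crossed(pairs):
          """Dependencies resolve in the same order in which they were opened."""
          return [a for a, _ in pairs] + [b for _, b in pairs]

      pairs = [("A1", "B1"), ("A2", "B2"), ("A3", "B3")]
      print(nested(pairs))   # ['A1', 'A2', 'A3', 'B3', 'B2', 'B1']
      print(crossed(pairs))  # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3']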
  • Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.

    Abstract

    Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, indicating that rhyme judgment skills decrease when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also took EEG measures. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., & Verhoeven, L. (2012). The nature of rhyme processing in preliterate children. British Journal of Educational Psychology, 82, 672-689. doi:10.1111/j.2044-8279.2011.02055.x.

    Abstract

    Background. Rhyme awareness is one of the earliest forms of phonological awareness to develop and is assessed in many developmental studies by means of a simple rhyme task. The influence of more demanding experimental paradigms on rhyme judgment performance is often neglected. Addressing this issue may also shed light on whether rhyme processing is more global or analytical in nature. Aims. The aim of the present study was to examine whether lexical status and global similarity relations influenced rhyme judgments in kindergarten children and if so, if there is an interaction between these two factors. Sample. Participants were 41 monolingual Dutch-speaking preliterate kindergartners (average age 6.0 years) who had not yet received any formal reading education. Method. To examine the effects of lexical status and phonological similarity processing, the kindergartners were asked to make rhyme judgements on (pseudo) word targets that rhymed, phonologically overlapped or were unrelated to (pseudo) word primes. Results. Both a lexicality effect (pseudo-words were more difficult than words) and a global similarity effect (globally similar non-rhyming items were more difficult to reject than unrelated items) were observed. In addition, whereas in words the global similarity effect was only present in accuracy outcomes, in pseudo-words it was also observed in the response latencies. Furthermore, a large global similarity effect in pseudo-words correlated with a low score on short-term memory skills and grapheme knowledge. Conclusions. Increasing task demands led to a more detailed assessment of rhyme processing skills. Current assessment paradigms should therefore be extended with more demanding conditions. In light of the views on rhyme processing, we propose that a combination of global and analytical strategies is used to make a correct rhyme judgment.
  • Walker, R. M., Hill, A. E., Newman, A. C., Hamilton, G., Torrance, H. S., Anderson, S. M., Ogawa, F., Derizioti, P., Nicod, J., Vernes, S. C., Fisher, S. E., Thomson, P. A., Porteous, D. J., & Evans, K. L. (2012). The DISC1 promoter: Characterization and regulation by FOXP2. Human Molecular Genetics, 21, 2862-2872. doi:10.1093/hmg/dds111.

    Abstract

    Disrupted in schizophrenia 1 (DISC1) is a leading candidate susceptibility gene for schizophrenia, bipolar disorder, and recurrent major depression, which has been implicated in other psychiatric illnesses of neurodevelopmental origin, including autism. DISC1 was initially identified at the breakpoint of a balanced chromosomal translocation, t(1;11) (q42.1;14.3), in a family with a high incidence of psychiatric illness. Carriers of the translocation show a 50% reduction in DISC1 protein levels, suggesting altered DISC1 expression as a pathogenic mechanism in psychiatric illness. Altered DISC1 expression in the post-mortem brains of individuals with psychiatric illness and the frequent implication of non-coding regions of the gene by association analysis further support this assertion. Here, we provide the first characterisation of the DISC1 promoter region. Using dual luciferase assays, we demonstrate that a region -300bp to -177bp relative to the transcription start site (TSS) contributes positively to DISC1 promoter activity, whilst a region -982bp to -301bp relative to the TSS confers a repressive effect. We further demonstrate inhibition of DISC1 promoter activity and protein expression by FOXP2, a transcription factor implicated in speech and language function. This inhibition is diminished by two distinct FOXP2 point mutations, R553H and R328X, which were previously found in families affected by developmental verbal dyspraxia (DVD). Our work identifies an intriguing mechanistic link between neurodevelopmental disorders that have traditionally been viewed as diagnostically distinct but which do share varying degrees of phenotypic overlap.
  • Waller, D., Loomis, J. M., & Haun, D. B. M. (2004). Body-based senses enhance knowledge of directions in large-scale environments. Psychonomic Bulletin & Review, 11(1), 157-163.

    Abstract

    Previous research has shown that inertial cues resulting from passive transport through a large environment do not necessarily facilitate acquiring knowledge about its layout. Here we examine whether the additional body-based cues that result from active movement facilitate the acquisition of spatial knowledge. Three groups of participants learned locations along an 840-m route. One group walked the route during learning, allowing access to body-based cues (i.e., vestibular, proprioceptive, and efferent information). Another group learned by sitting in the laboratory, watching videos made from the first group. A third group watched a specially made video that minimized potentially confusing head-on-trunk rotations of the viewpoint. All groups were tested on their knowledge of directions in the environment as well as on its configural properties. Having access to body-based information reduced pointing error by a small but significant amount. Regardless of the sensory information available during learning, participants exhibited strikingly common biases.
  • Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.

    Abstract

    The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.

    Abstract

    Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion.
  • Wang, L., Zhu, Z., & Bastiaansen, M. C. M. (2012). Integration or predictability? A further specification of the functional role of gamma oscillations in language comprehension. Frontiers in Psychology, 3, 187. doi:10.3389/fpsyg.2012.00187.

    Abstract

    Gamma-band neuronal synchronization during sentence-level language comprehension has previously been linked with semantic unification. Here, we attempt to further narrow down the functional significance of gamma during language comprehension, by distinguishing between two aspects of semantic unification: successful integration of word meaning into the sentence context, and prediction of upcoming words. We computed event-related potentials (ERPs) and frequency band-specific electroencephalographic (EEG) power changes while participants read sentences that contained a critical word (CW) that was (1) both semantically congruent and predictable (high cloze, HC), (2) semantically congruent but unpredictable (low cloze, LC), or (3) semantically incongruent (and therefore also unpredictable; semantic violation, SV). The ERP analysis showed the expected parametric N400 modulation (HC < LC < SV). The time-frequency analysis showed qualitatively different results. In the gamma-frequency range, we observed a power increase in response to the CW in the HC condition, but not in the LC and the SV conditions. Additionally, in the theta frequency range we observed a power increase in the SV condition only. Our data provide evidence that gamma power increases are related to the predictability of an upcoming word based on the preceding sentence context, rather than to the integration of the incoming word’s semantics into the preceding context. Further, our theta band data are compatible with the notion that theta band synchronization in sentence comprehension might be related to the detection of an error in the language input.
  • Warner, N., Jongman, A., Sereno, J., & Kemps, R. J. J. K. (2004). Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics, 32(2), 251-276. doi:10.1016/S0095-4470(03)00032-9.

    Abstract

    Words which are expected to contain the same surface string of segments may, under identical prosodic circumstances, sometimes be realized with slight differences in duration. Some researchers have attributed such effects to differences in the words’ underlying forms (incomplete neutralization), while others have suggested orthographic influence and extremely careful speech as the cause. In this paper, we demonstrate such sub-phonemic durational differences in Dutch, a language which some past research has found not to have such effects. Past literature has also shown that listeners can often make use of incomplete neutralization to distinguish apparent homophones. We extend perceptual investigations of this topic, and show that listeners can perceive even durational differences which are not consistently observed in production. We further show that a difference which is primarily orthographic rather than underlying can also create such durational differences. We conclude that a wide variety of factors, in addition to underlying form, can induce speakers to produce slight durational differences which listeners can also use in perception.
  • Warner, N., & Cutler, A. (2017). Stress effects in vowel perception as a function of language-specific vocabulary patterns. Phonetica, 74, 81-106. doi:10.1159/000447428.

    Abstract

    Background/Aims: Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels. Methods: All possible sequences of two segments (diphones) in Dutch and in English were presented to native listeners in gated fragments. We recorded identification performance over time throughout the speech signal. The data were here analysed specifically for patterns in perception of stressed versus unstressed vowels. Results: The data reveal significantly larger stress effects (whereby unstressed vowels are harder to identify than stressed vowels) in English than in Dutch. Both language-specific and shared patterns appear regarding which vowels show stress effects. Conclusion: We explain the larger stress effect in English as reflecting the processing demands caused by the difference in use of unstressed vowels in the lexicon. The larger stress effect in English is due to relative inexperience with processing unstressed full vowels.
  • Warner, N. L., McQueen, J. M., Liu, P. Z., Hoffmann, M., & Cutler, A. (2012). Timing of perception for all English diphones [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 1967.

    Abstract

    Information in speech does not unfold discretely over time; perceptual cues are gradient and overlapped. However, this varies greatly across segments and environments: listeners cannot identify the affricate in /ptS/ until the frication, but information about the vowel in /li/ begins early. Unlike most prior studies, which have concentrated on subsets of language sounds, this study tests perception of every English segment in every phonetic environment, sampling perceptual identification at six points in time (13,470 stimuli/listener; 20 listeners). Results show that information about consonants after another segment is most localized for affricates (almost entirely in the release), and most gradual for voiced stops. In comparison to stressed vowels, unstressed vowels have less information spreading to neighboring segments and are less well identified. Indeed, many vowels, especially lax ones, are poorly identified even by the end of the following segment. This may partly reflect listeners’ familiarity with English vowels’ dialectal variability. Diphthongs and diphthongal tense vowels show the most sudden improvement in identification, similar to affricates among the consonants, suggesting that information about segments defined by acoustic change is highly localized. This large dataset provides insights into speech perception and data for probabilistic modeling of spoken word recognition.
  • Wassenaar, M., Brown, C. M., & Hagoort, P. (2004). ERP-effects of subject-verb agreement violations in patients with Broca's aphasia. Journal of Cognitive Neuroscience, 16(4), 553-576. doi:10.1162/089892904323057290.

    Abstract

    This article presents electrophysiological data on on-line syntactic processing during auditory sentence comprehension in patients with Broca's aphasia. Event-related brain potentials (ERPs) were recorded from the scalp while subjects listened to sentences that were either syntactically correct or contained violations of subject-verb agreement. Three groups of subjects were tested: Broca patients (n = 10), nonaphasic patients with a right-hemisphere (RH) lesion (n = 5), and healthy age-matched controls (n = 12). The healthy control subjects showed a P600/SPS effect in response to the agreement violations. The nonaphasic patients with an RH lesion showed essentially the same pattern. The overall group of Broca patients did not show this sensitivity. However, the sensitivity was modulated by the severity of the syntactic comprehension impairment. The largest deviation from the standard P600/SPS effect was found in the patients with the relatively more severe syntactic comprehension impairment. In addition, ERPs to tones in a classical tone oddball paradigm were also recorded. Similar to the normal control subjects and RH patients, the group of Broca patients showed a P300 effect in the tone oddball condition. This indicates that aphasia in itself does not lead to a general reduction in all cognitive ERP effects. It was concluded that deviations from the standard P600/SPS effect in the Broca patients reflected difficulties with on-line maintaining of number information across clausal boundaries for establishing subject-verb agreement.
  • Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language, 50(1), 1-25. doi:10.1016/S0749-596X(03)00105-0.

    Abstract

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, ‘lid,’ given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
  • Weber, A., & Scharenborg, O. (2012). Models of spoken-word recognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 387-401. doi:10.1002/wcs.1178.

    Abstract

    All words of the languages we know are stored in the mental lexicon. Psycholinguistic models describe in which format lexical knowledge is stored and how it is accessed when needed for language use. The present article summarizes key findings in spoken-word recognition by humans and describes how models of spoken-word recognition account for them. Although current models of spoken-word recognition differ considerably in the details of implementation, there is general consensus among them on at least three aspects: multiple word candidates are activated in parallel as a word is being heard, activation of word candidates varies with the degree of match between the speech signal and stored lexical representations, and activated candidate words compete for recognition. No consensus has been reached on other aspects such as the flow of information between different processing levels, and the format of stored prelexical and lexical representations.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, A., & Crocker, M. W. (2012). On the nature of semantic constraints on lexical access. Journal of Psycholinguistic Research, 41, 195-214. doi:10.1007/s10936-011-9184-0.

    Abstract

    We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, ‘flower’) when instructed to click on low-frequency targets (e.g., Bluse, ‘blouse’). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents.
  • Weber, A., & Broersma, M. (2012). Spoken word recognition in second language acquisition. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics. Bognor Regis: Wiley-Blackwell. doi:10.1002/9781405198431.wbeal1104.

    Abstract

    In order to decode the message of a speaker, listeners have to recognize individual words in the speaker's utterance.
  • Weber, A., & Paris, G. (2004). The origin of the linguistic gender effect in spoken-word recognition: Evidence from non-native listening. In K. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Erlbaum.

    Abstract

    Two eye-tracking experiments examined linguistic gender effects in non-native spoken-word recognition. French participants, who knew German well, followed spoken instructions in German to click on pictures on a computer screen (e.g., Wo befindet sich die Perle, “where is the pearl”) while their eye movements were monitored. The name of the target picture was preceded by a gender-marked article in the instructions. When a target and a competitor picture (with phonologically similar names) were of the same gender in both German and French, French participants fixated competitor pictures more than unrelated pictures. However, when target and competitor were of the same gender in German but of different gender in French, early fixations to the competitor picture were reduced. Competitor activation in the non-native language was seemingly constrained by native gender information. German listeners showed no such viewing time difference. The results speak against a form-based account of the linguistic gender effect. They rather support the notion that the effect originates from the grammatical level of language processing.
  • Weber, K. (2012). The language learning brain: Evidence from second language learning and bilingual studies of syntactic processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Many people speak a second language alongside their mother tongue. How do they learn this language, and how does the brain process it compared to the native language? A second language can be learned without explicit instruction. Our brains automatically pick up grammatical structures, such as word order, when these structures are repeated frequently during learning. The learning takes place within hours or days, and the same brain areas that process our native language, such as frontal and temporal brain regions, are very quickly activated. When people master a second language very well, even the same neuronal populations in these language brain areas are involved. This is especially the case when the grammatical structures are similar. In conclusion, it appears that a second language builds on the existing cognitive and neural mechanisms of the native language as much as possible.
  • Weber, A., & Mueller, K. (2004). Word order variation in German main clauses: A corpus analysis. In Proceedings of the 20th International Conference on Computational Linguistics.

    Abstract

    In this paper, we present empirical data from a corpus study on the linear order of subjects and objects in German main clauses. The aim was to establish the validity of three well-known ordering constraints: given complements tend to occur before new complements, definite before indefinite, and pronoun before full noun phrase complements. Frequencies of occurrences were derived for subject-first and object-first sentences from the German Negra corpus. While all three constraints held on subject-first sentences, results for object-first sentences varied. Our findings suggest an influence of grammatical functions on the ordering of verb complements.
  • Wegman, J., Tyborowska, A., Hoogman, M., Vasquez, A. A., & Janzen, G. (2017). The brain-derived neurotrophic factor Val66Met polymorphism affects encoding of object locations during active navigation. European Journal of Neuroscience, 45(12), 1501-1511. doi:10.1111/ejn.13416.

    Abstract

    The brain-derived neurotrophic factor (BDNF) was shown to be involved in spatial memory and spatial strategy preference. A naturally occurring single nucleotide polymorphism of the BDNF gene (Val66Met) affects activity-dependent secretion of BDNF. The current event-related fMRI study on preselected groups of ‘Met’ carriers and homozygotes of the ‘Val’ allele investigated the role of this polymorphism on encoding and retrieval in a virtual navigation task in 37 healthy volunteers. In each trial, participants navigated toward a target object. During encoding, three positional cues (columns) with directional cues (shadows) were available. During retrieval, the invisible target had to be replaced while either two objects without shadows (objects trial) or one object with a shadow (shadow trial) were available. The experiment consisted of blocks, informing participants of which trial type would be most likely to occur during retrieval. We observed no differences between genetic groups in task performance or time to complete the navigation tasks. The imaging results show that Met carriers compared to Val homozygotes activate the left hippocampus more during successful object location memory encoding. The observed effects were independent of non-significant performance differences or volumetric differences in the hippocampus. These results indicate that variations of the BDNF gene affect memory encoding during spatial navigation, suggesting that lower levels of BDNF in the hippocampus result in less efficient spatial memory processing.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2012). Corrigendum to CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 11, 501. doi:10.1111/j.1601-183X.2012.00806.x.

    Abstract

    Corrigendum to: CNTNAP2 variants affect early language development in the general population. A. J. O. Whitehouse, D. V. M. Bishop, Q. W. Ang, C. E. Pennell and S. E. Fisher, Genes Brain Behav (2011), doi: 10.1111/j.1601-183X.2011.00684.x. The authors have detected a typographical error in the Abstract of this paper. The error is in the fifth sentence, which reads: “On the basis of these findings, we performed analyses of four-marker haplotypes of rs2710102–rs759178–rs17236239–rs2538976 and identified significant association (haplotype TTAA, P = 0.049; haplotype GCAG, P = .0014).” Rather than “GCAG”, the final haplotype should read “CGAG”. This typographical error was made in the Abstract only and has no bearing on the results or conclusions of the study, which remain unchanged. Reference: Whitehouse, A. J. O., Bishop, D. V. M., Ang, Q. W., Pennell, C. E. & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes Brain Behav, 10, 451–456. doi: 10.1111/j.1601-183X.2011.00684.x.
  • Whitehouse, H., & Cohen, E. (2012). Seeking a rapprochement between anthropology and the cognitive sciences: A problem-driven approach. Topics in Cognitive Science, 4, 404-412. doi:10.1111/j.1756-8765.2012.01203.x.

    Abstract

    Beller, Bender, and Medin question the necessity of including social anthropology within the cognitive sciences. We argue that there is great scope for fruitful rapprochement while agreeing that there are obstacles (even if we might wish to debate some of those specifically identified by Beller and colleagues). We frame the general problem differently, however: not in terms of the problem of reconciling disciplines and research cultures, but rather in terms of the prospects for collaborative deployment of expertise (methodological and theoretical) in problem-driven research. For the purposes of illustration, our focus in this article is on the evolution of cooperation.
  • Whorf, B. L. (2012). Language, thought, and reality: selected writings of Benjamin Lee Whorf [2nd ed.]: introduction by John B. Carroll; foreword by Stephen C. Levinson. (J. B. Carroll, S. C. Levinson, & P. Lee, Eds.). Cambridge, MA: MIT Press.

    Abstract

    The pioneering linguist Benjamin Whorf (1897–1941) grasped the relationship between human language and human thinking: how language can shape our innermost thoughts. His basic thesis is that our perception of the world and our ways of thinking about it are deeply influenced by the structure of the languages we speak. The writings collected in this volume include important papers on the Maya, Hopi, and Shawnee languages, as well as more general reflections on language and meaning. Whorf’s ideas about the relation of language and thought have always appealed to a wide audience, but their reception in expert circles has alternated between dismissal and applause. Recently the language sciences have headed in directions that give Whorf’s thinking a renewed relevance. Hence this new edition of Whorf’s classic work is especially timely. The second edition includes all the writings from the first edition as well as John Carroll’s original introduction, a new foreword by Stephen Levinson of the Max Planck Institute for Psycholinguistics that puts Whorf’s work in historical and contemporary context, and new indexes. In addition, this edition offers Whorf’s “Yale Report,” an important work from Whorf’s mature oeuvre.
  • Widlok, T. (2004). Ethnography in Language Documentation. Language Archive Newsletter, 1(3), 4-6.
  • Wiese, R., Orzechowska, P., Alday, P. M., & Ulbrich, C. (2017). Structural Principles or Frequency of Use? An ERP Experiment on the Learnability of Consonant Clusters. Frontiers in Psychology, 7: 2005. doi:10.3389/fpsyg.2016.02005.

    Abstract

    Phonological knowledge of a language involves knowledge about which segments can be combined under what conditions. Languages vary in the quantity and quality of licensed combinations, in particular sequences of consonants, with Polish being a language with a large inventory of such combinations. The present paper reports on a two-session experiment in which Polish-speaking adult participants learned nonce words with final consonant clusters. The aim was to study the role of two factors which potentially play a role in the learning of phonotactic structures: the phonological principle of sonority (ordering sound segments within the syllable according to their inherent loudness) and the (non-)existence of the clusters as a usage-based phenomenon. EEG responses in two different time windows (in contrast to behavioral responses) show linguistic processing by native speakers of Polish to be sensitive to both distinctions, in spite of the fact that Polish is rich in sonority-violating clusters. In particular, a general learning effect in terms of an N400 effect was found, which was demonstrated to be different for sonority-obeying clusters than for sonority-violating clusters. Furthermore, significant interactions of formedness and session, and of existence and session, demonstrate that both factors, the sonority principle and the frequency pattern, play a role in the learning process.
  • Willems, R. M., & Francken, J. C. (2012). Embodied cognition: Taking the next step. Frontiers in Psychology, 3, 582. doi:10.3389/fpsyg.2012.00582.

    Abstract

    Recent years have seen a large number of empirical studies related to ‘embodied cognition’. While interesting and valuable, there is something dissatisfying about the current state of affairs in this research domain. Hypotheses tend to be underspecified, testing in general terms for embodied versus disembodied processing. The lack of specificity of current hypotheses can easily lead to an erosion of the embodiment concept, and result in a situation in which essentially any effect is taken as positive evidence. Such erosion is not helpful to the field and does not do justice to the importance of embodiment. Here we want to take stock and formulate directions for how embodiment can be studied in a more fruitful fashion. As an example we describe a few studies that have investigated the role of sensori-motor systems in the coding of meaning (‘embodied semantics’). Instead of focusing on the dichotomy between embodied and disembodied theories, we suggest that the field move forward and ask how and when sensori-motor systems and behavior are involved in cognition.
  • Windhouwer, M., Broeder, D., & Van Uytvanck, D. (2012). A CMD core model for CLARIN web services. In Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 41-48).

    Abstract

    In the CLARIN infrastructure, various national projects have started initiatives to allow users of the infrastructure to create chains or workflows of web services. The Component Metadata (CMD) core model for web services described in this paper tries to align the metadata descriptions of these various initiatives. This should allow chaining/workflow engines to find matching services and invoke them. The paper describes the landscape of web services architectures and the state of the national initiatives. Based on this, a CMD core model for CLARIN is proposed, which, within some limits, can be adapted to the specific needs of an initiative by the standard facilities of CMD. The paper closes with the current state and usage of the model and a look into the future.
  • Windhouwer, M., & Wright, S. E. (2012). Linking to linguistic data categories in ISOcat. In C. Chiarcos, S. Nordhoff, & S. Hellmann (Eds.), Linked data in linguistics: Representing and connecting language data and language metadata (pp. 99-107). Berlin: Springer.

    Abstract

    ISO Technical Committee 37, Terminology and other language and content resources, established an ISO 12620:2009 based Data Category Registry (DCR), called ISOcat (see http://www.isocat.org), to foster semantic interoperability of linguistic resources. However, this goal can only be met if the data categories are reused by a wide variety of linguistic resource types. A resource indicates its usage of data categories by linking to them. The small DC Reference XML vocabulary is used to embed links to data categories in XML documents. The link is established by a URI, which serves as the Persistent IDentifier (PID) of a data category. This paper discusses the efforts to mimic the same approach for RDF-based resources. It also introduces the RDF quad store based Relation Registry RELcat, which enables ontological relationships between data categories not supported by ISOcat and thus adds an extra level of linguistic knowledge.
  • Windhouwer, M. (2012). RELcat: a Relation Registry for ISOcat data categories. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3661-3664). European Language Resources Association (ELRA).

    Abstract

    The ISOcat Data Category Registry contains a basically flat and easily extensible list of data category specifications. To foster reuse and standardization, only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome possible proliferation of data categories, more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or of a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. On the one hand, this allows loading existing sets of relations specified in, for example, an OWL (2) ontology or a SKOS taxonomy. On the other hand, it allows algorithms that query the registry and traverse the stored semantic network to remain ignorant of the original source vocabulary. This paper describes first experiences with RELcat and explains some initial design decisions.
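
    As an illustration of the underlying idea, the sketch below implements a toy relation registry in Python: relation instances are stored as (subject, relation type, object) triples whose relation types come from an extensible taxonomy, so a query phrased with a broad relation type also finds assertions made with its narrower subtypes. The class, method, relation-type and identifier names here are invented for this sketch and do not correspond to RELcat's actual API or to real ISOcat identifiers.

    # Toy relation registry: an extensible relation-type taxonomy plus a store of
    # (subject, relation type, object) triples. All identifiers are made up.
    TYPE_TAXONOMY = {"exactSameAs": "sameAs", "almostSameAs": "sameAs"}

    def broader_types(rel_type):
        """A relation type together with all types it is subsumed under."""
        types = {rel_type}
        while rel_type in TYPE_TAXONOMY:
            rel_type = TYPE_TAXONOMY[rel_type]
            types.add(rel_type)
        return types

    class RelationRegistry:
        def __init__(self):
            self.triples = []  # (subject, relation type, object)

        def add(self, subject, rel_type, obj):
            self.triples.append((subject, rel_type, obj))

        def related(self, subject, rel_type):
            """Objects related to subject via rel_type or any narrower subtype."""
            return {o for s, r, o in self.triples
                    if s == subject and rel_type in broader_types(r)}

    registry = RelationRegistry()
    registry.add("registryA:partOfSpeech", "exactSameAs", "registryB:pos")
    print(registry.related("registryA:partOfSpeech", "sameAs"))  # {'registryB:pos'}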
  • Windhouwer, M. (2012). Towards standardized descriptions of linguistic features: ISOcat and procedures for using common data categories. In J. Jancsary (Ed.), Proceedings of the Conference on Natural Language Processing 2012, (SFLR 2012 workshop), September 19-21, 2012, Vienna (p. 494). Vienna: Österreichische Gesellschaft für Artificial Intelligence (ÖGAI).
  • Withers, P. (2012). Metadata management with Arbil. In V. Arranz, D. Broeder, B. Gaiffe, M. Gavrilidou, & M. Monachini (Eds.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 72-75). European Language Resources Association (ELRA).

    Abstract

    Arbil is an application designed to create and manage metadata for research data and to arrange this data into a structure appropriate for archiving. The metadata is displayed in tables, which allows an overview of the metadata and the ability to populate and update many metadata sections in bulk. Both IMDI and CLARIN metadata formats are supported, and Arbil has been designed as a local application so that it can also be used offline, for instance at remote field sites. The metadata can be entered in any order and at whatever stage the user is able; once the metadata and its data are ready for archiving and an Internet connection is available, they can be exported from Arbil and, in the case of IMDI, transferred to the main archive via LAMUS (the archive management and upload system).
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2004). Technology and Tools for Language Documentation. Language Archive Newsletter, 1(4), 3-4.
  • Wittenburg, P. (2004). The IMDI metadata concept. In S. F. Ferreira (Ed.), Workingmaterial on Building the LR&E Roadmap: Joint COCOSDA and ICCWLRE Meeting, (LREC2004). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P. (2004). Training Course in Lithuania. Language Archive Newsletter, 1(2), 6-6.
  • Wittenburg, P., Brugman, H., Broeder, D., & Russel, A. (2004). XML-based language archiving. In Workshop Proceedings on XML-based Richly Annotaded Corpora (LREC2004) (pp. 63-69). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P., Lenkiewicz, P., Auer, E., Gebre, B. G., Lenkiewicz, A., & Drude, S. (2012). AV Processing in eHumanities - a paradigm shift. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 538-541).

    Abstract

    Introduction: Speech research saw a dramatic change of paradigm in the 1990s. While earlier the discussion was dominated by the approach of phoneticians, who knew about phenomena in the speech signal, the situation changed completely after stochastic machinery such as Hidden Markov Models [1] and Artificial Neural Networks [2] had been introduced. Speech processing was now dominated by a purely mathematical approach that basically ignored all existing knowledge about the speech production process and the perception mechanisms. The key was now to construct a large enough training set that would allow identifying the many free parameters of such stochastic engines. If the training set is representative and its annotations are largely ‘correct’, one can expect a satisfactorily functioning recognizer. While the success of knowledge-based systems such as Hearsay II [3] was limited, the statistically based approach led to great improvements in recognition rates and to industrial applications.
  • Wittenburg, P., Gulrajani, G., Broeder, D., & Uneson, M. (2004). Cross-disciplinary integration of metadata descriptions. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 113-116). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P., Dirksmeyer, R., Brugman, H., & Klaas, G. (2004). Digital formats for images, audio and video. Language Archive Newsletter, 1(1), 3-6.
  • Wittenburg, P., Johnson, H., Buchhorn, M., Brugman, H., & Broeder, D. (2004). Architecture for distributed language resource management and archiving. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 361-364). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P. (2004). International Expert Meeting on Access Management for Distributed Language Archives. Language Archive Newsletter, 1(3), 12-12.
  • Wittenburg, P. (2004). Final review of INTERA. Language Archive Newsletter, 1(4), 11-12.
  • Wittenburg, P. (2004). LinguaPax Forum on Language Diversity, Sustainability, and Peace. Language Archive Newsletter, 1(3), 13-13.
  • Wittenburg, P. (2004). LREC conference 2004. Language Archive Newsletter, 1(3), 12-13.
  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Wittenburg, P., Drude, S., & Broeder, D. (2012). Psycholinguistik. In H. Neuroth, S. Strathmann, A. Oßwald, R. Scheffel, J. Klump, & J. Ludwig (Eds.), Langzeitarchivierung von Forschungsdaten. Eine Bestandsaufnahme (pp. 83-108). Boizenburg: Verlag Werner Hülsbusch.

    Abstract

    5.1 Introduction to the research field: Psycholinguistics is the branch of linguistics concerned with the relationship between human language and thinking and other mental processes; that is, it addresses a number of essential questions, such as: (1) How does our brain manage to understand essentially acoustic and visual communicative information and to convert it into mental representations? (2) How can our brain convert a complex state of affairs that we want to communicate to others into a sequence of verbal and nonverbal actions that others can process? (3) How do we manage to learn languages in the different phases of life? (4) Are the cognitive processes of language processing universal, even though language systems are so different that hardly any universals can be found in their structures?
  • Wnuk, E., De Valk, J. M., Huisman, J. L. A., & Majid, A. (2017). Hot and cold smells: Odor-temperature associations across cultures. Frontiers in Psychology, 8: 1373. doi:10.3389/fpsyg.2017.01373.

    Abstract

    It is often assumed odors are associated with hot and cold temperature, since odor processing may trigger thermal sensations, such as coolness in the case of mint. It is unknown, however, whether people make consistent temperature associations for a variety of everyday odors, and, if so, what determines them. Previous work investigating the bases of cross-modal associations suggests a number of possibilities, including universal forces (e.g., perception), as well as culture-specific forces (e.g., language and cultural beliefs). In this study, we examined odor-temperature associations in three cultures—Maniq (N = 11), Thai (N = 24), and Dutch (N = 24)—who differ with respect to their cultural preoccupation with odors, their odor lexicons, and their beliefs about the relationship of odors (and odor objects) to temperature. Participants matched 15 odors to temperature by touching cups filled with hot or cold water, and described the odors in their native language. The results showed no consistent associations among the Maniq, and only a handful of consistent associations between odor and temperature among the Thai and Dutch. The consistent associations differed across the two groups, arguing against their universality. Further analysis revealed cross-modal associations could not be explained by language, but could be the result of cultural beliefs
  • Wnuk, E., & Majid, A. (2012). Olfaction in a hunter-gatherer society: Insights from language and culture. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1155-1160). Austin, TX: Cognitive Science Society.

    Abstract

    According to a widely-held view among various scholars, olfaction is inferior to other human senses. It is also believed by many that languages do not have words for describing smells. Data collected among the Maniq, a small population of nomadic foragers in southern Thailand, challenge the above claims and point to a great linguistic and cultural elaboration of odor. This article presents evidence of the importance of olfaction in indigenous rituals and beliefs, as well as in the lexicon. The results demonstrate the richness and complexity of the domain of smell in Maniq society and thereby challenge the universal paucity of olfactory terms and insignificance of olfaction for humans.
  • Wong, M. M. K., Watson, L. M., & Becker, E. B. E. (2017). Recent advances in modelling of cerebellar ataxia using induced pluripotent stem cells. Journal of Neurology & Neuromedicine, 2(7), 11-15. doi:10.29245/2572.942X/2017/7.1134.

    Abstract

    The cerebellar ataxias are a group of incurable brain disorders that are caused primarily by the progressive dysfunction and degeneration of cerebellar Purkinje cells. The lack of reliable disease models for the heterogeneous ataxias has hindered the understanding of the underlying pathogenic mechanisms as well as the development of effective therapies for these devastating diseases. Recent advances in the field of induced pluripotent stem cell (iPSC) technology offer new possibilities to better understand and potentially reverse disease pathology. Given the neurodevelopmental phenotypes observed in several types of ataxias, iPSC-based models have the potential to provide significant insights into disease progression, as well as opportunities for the development of early intervention therapies. To date, however, very few studies have successfully used iPSC-derived cells to model cerebellar ataxias. In this review, we focus on recent breakthroughs in generating human iPSC-derived Purkinje cells. We also highlight the future challenges that will need to be addressed in order to fully exploit these models for the modelling of the molecular mechanisms underlying cerebellar ataxias and the development of effective therapeutics.
  • Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.

    Abstract

    We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggests that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and the BA47-Parietal pathway, SndSym and the inter-hemispheric BA45 pathway, GrInf and the BA45-Temporal and BA6-Temporal pathways, IQ and the BA44-Parietal, BA47-Parietal, BA47-Temporal and inter-hemispheric BA45 pathways, SWM and the inter-hemispheric BA6 and BA47-Parietal pathways, and VWM and the BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature.
  • Xiang, H. (2012). The language networks of the brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    In recent decades, neuroimaging studies on the neural infrastructure of language have usually been conducted with on-line language processing tasks. These functional neuroimaging studies helped to localize the language areas in the brain and to investigate brain activity during explicit language processing. However, little is known about what is going on in the language areas when the brain is ‘at rest’, i.e., when no explicit language processing is taking place. Taking advantage of the fcMRI and DTI techniques, this thesis investigates the language function ‘off-line’, at the neuronal network level, and the connectivity among language areas in the brain. Based on patient studies, the traditional, classical model of the perisylvian language network specifies a “Broca’s area – Arcuate Fasciculus – Wernicke’s area” loop (Ojemann 1991). With the help of modern neuroimaging techniques, researchers have been able to track language pathways that involve more brain structures than the classical model includes, and to relate them to certain language functions. Against this background, a large part of this thesis contributes to the study of the topology of the language networks. It revealed that the language networks form a topographical functional connectivity pattern in the left hemisphere for right-handers. The thesis also revealed the importance of structural hubs, such as Broca’s and Wernicke’s areas, which have more connectivity to other brain areas and play a central role in the language networks. Furthermore, the thesis revealed both functionally and structurally lateralized language networks in the brain. The consistency between what is found in this thesis and what is known from previous functional studies suggests that the human brain is optimized and ‘ready’ for the language function even when no explicit language processing is currently running.
  • Yager, J., & Burenhult, N. (2017). Jedek: a newly discovered Aslian variety of Malaysia. Linguistic Typology, 21(3), 493-545. doi:10.1515/lingty-2017-0012.

    Abstract

    Jedek is a previously unrecognized variety of the Northern Aslian subgroup of the Aslian branch of the Austroasiatic language family. It is spoken by c. 280 individuals in the resettlement area of Sungai Rual, near Jeli in Kelantan state, Peninsular Malaysia. The community originally consisted of several bands of foragers along the middle reaches of the Pergau river. Jedek’s distinct status first became known during a linguistic survey carried out in the DOBES project Tongues of the Semang (2005-2011). This paper describes the process leading up to its discovery and provides an overview of its typological characteristics.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2017). The phonological unit of Japanese Kanji compounds: A masked priming investigation. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1303-1328. doi:10.1037/xhp0000374.

    Abstract

    Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiment 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes.
  • You, W., Zhang, Q., & Verdonschot, R. G. (2012). Masked syllable priming effects in word and picture naming in Chinese. PLoS One, 7(10): e46595. doi:10.1371/journal.pone.0046595.

    Abstract

    Four experiments investigated the role of the syllable in Chinese spoken word production. Chen, Chen and Ferrand (2003) reported a syllable priming effect when primes and targets shared the first syllable using a masked priming paradigm in Chinese. Our Experiment 1 was a direct replication of Chen et al.'s (2003) Experiment 3 employing CV (e.g., /ba2.ying2/, strike camp) and CVG (e.g., /bai2.shou3/, white haired) syllable types. Experiment 2 tested the syllable priming effect using different syllable types: e.g., CV (/qi4.qiu2/, balloon) and CVN (/qing1.ting2/, dragonfly). Experiment 3 investigated this issue further using line drawings of common objects as targets that were preceded either by a CV (e.g., /qi3/, attempt) or a CVN (e.g., /qing2/, affection) prime. Experiment 4 further examined the priming effect by a comparison between CV or CVN priming and an unrelated priming condition using CV-NX (e.g., /mi2.ni3/, mini) and CVN-CX (e.g., /min2.ju1/, dwellings) as target words. These four experiments consistently found that CV targets were named faster when preceded by CV primes than when they were preceded by CVG, CVN or unrelated primes, whereas CVG or CVN targets showed the reverse pattern. These results indicate that the priming effect critically depends on the match between the structure of the prime and that of the first syllable of the target. The effect obtained in this study was consistent across different stimuli and different tasks (word and picture naming), and provides more conclusive and consistent data regarding the role of the syllable in Chinese speech production.
  • Zampieri, M., & Gebre, B. G. (2012). Automatic identification of language varieties: The case of Portuguese. In J. Jancsary (Ed.), Proceedings of the Conference on Natural Language Processing 2012, September 19-21, 2012, Vienna (pp. 233-237). Vienna: Österreichische Gesellschaft für Artificial Intelligence (ÖGAI).

    Abstract

    Automatic Language Identification of written texts is a well-established area of research in Computational Linguistics. State-of-the-art algorithms often rely on n-gram character models to identify the correct language of texts, with good results seen for European languages. In this paper we propose the use of a character n-gram model and a word n-gram language model for the automatic classification of two written varieties of Portuguese: European and Brazilian. Results reached 0.998 for accuracy using character 4-grams.
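
    As a rough illustration of character n-gram based variety identification, the sketch below trains a tiny add-one-smoothed model over character 4-grams and scores a new text against each variety. This is only a schematic Python re-creation of the general technique named in the abstract, not the authors' actual system, and the two training snippets are invented placeholders.

    # Schematic character 4-gram classifier (add-one smoothing); toy data only.
    from collections import Counter
    import math

    def char_ngrams(text, n=4):
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    def train(samples, n=4):
        """samples: {variety label: list of texts} -> per-label n-gram counts."""
        return {label: Counter(g for text in texts for g in char_ngrams(text, n))
                for label, texts in samples.items()}

    def classify(text, models, n=4):
        scores = {}
        for label, counts in models.items():
            total, vocab = sum(counts.values()), max(len(counts), 1)
            scores[label] = sum(math.log((counts[g] + 1) / (total + vocab))
                                for g in char_ngrams(text, n))
        return max(scores, key=scores.get)

    models = train({
        "pt-BR": ["o time fez um gol no primeiro tempo do jogo"],
        "pt-PT": ["a equipa marcou um golo na primeira parte do jogo"],
    })
    print(classify("marcaram um golo na segunda parte", models))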
  • Zampieri, M., Gebre, B. G., & Diwersy, S. (2012). Classifying pluricentric languages: Extending the monolingual model. In Proceedings of SLTC 2012. The Fourth Swedish Language Technology Conference. Lund, October 24-26, 2012 (pp. 79-80). Lund University.

    Abstract

    This study presents a new language identification model for pluricentric languages that uses n-gram language models at the character and word level. The model is evaluated in two steps. The first step consists of the identification of two varieties of Spanish (Argentina and Spain) and two varieties of French (Quebec and France) evaluated independently in binary classification schemes. The second step integrates these language models in a six-class classification with two Portuguese varieties.
  • Zeshan, U. (2004). Basic English course taught in Indian Sign Language (Ali Yavar Jung National Institute for the Hearing Handicapped, Ed.). Mumbai: National Institute for the Hearing Handicapped.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zeshan, U., & De Vos, C. (Eds.). (2012). Sign languages in village communities: Anthropological and linguistic insights. Berlin: Mouton de Gruyter.

    Abstract

    The book is a unique collection of research on sign languages that have emerged in rural communities with a high incidence of, often hereditary, deafness. These sign languages represent the latest addition to the comparative investigation of languages in the gestural modality, and the book is the first compilation of a substantial number of different "village sign languages". Written by leading experts in the field, the volume uniquely combines anthropological and linguistic insights, looking at both the social dynamics and the linguistic structures in these village communities. The book includes primary data from eleven different signing communities across the world, including results from Jamaica, India, Turkey, Thailand, and Bali. All known village sign languages are endangered, usually because of pressure from larger urban sign languages, and some have died out already. Ironically, it is often the success of the larger sign language communities in urban centres, their recognition and subsequent spread, which leads to the endangerment of these small minority sign languages. The book addresses this specific type of language endangerment, documentation strategies, and other ethical issues pertaining to these sign languages on the basis of first-hand experiences by Deaf fieldworkers.
  • Zhang, Y., & Yu, C. (2017). How misleading cues influence referential uncertainty in statistical cross-situational learning. In M. LaMendola, & J. Scott (Eds.), Proceedings of the 41st Annual Boston University Conference on Language Development (BUCLD 41) (pp. 820-833). Boston, MA: Cascadilla Press.
  • Zhen, Z., Kong, X., Huang, L., Yang, Z., Wang, X., Hao, X., Huang, T., Song, Y., & Liu, J. (2017). Quantifying the variability of scene-selective regions: Interindividual, interhemispheric, and sex differences. Human Brain Mapping, 38(4), 2260-2275. doi:10.1002/hbm.23519.

    Abstract

    Scene-selective regions (SSRs), including the parahippocampal place area (PPA), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS), are among the most widely characterized functional regions in the human brain. However, previous studies have mostly focused on the commonality within each SSR, providing little information on different aspects of their variability. In a large group of healthy adults (N = 202), we used functional magnetic resonance imaging to investigate different aspects of topographical and functional variability within SSRs, including interindividual, interhemispheric, and sex differences. First, the PPA, RSC, and TOS were delineated manually for each individual. We then demonstrated that SSRs showed substantial interindividual variability in both spatial topography and functional selectivity. We further identified consistent interhemispheric differences in the spatial topography of all three SSRs, but distinct interhemispheric differences in scene selectivity. Moreover, we found that all three SSRs showed stronger scene selectivity in men than in women. In summary, our work thoroughly characterized the interindividual, interhemispheric, and sex variability of the SSRs and invites future work on the origin and functional significance of these variabilities. Additionally, we constructed the first probabilistic atlases for the SSRs, which provide the detailed anatomical reference for further investigations of the scene network.
  • Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.

    Abstract

    Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributed to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all of the three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects a semantic unification, which is different from other brain activations that may reflect task-specific strategic processing.

    Additional information

    Zhu_2012_suppl.dot
  • De Zubicaray, G., & Fisher, S. E. (Eds.). (2017). Genes, brain and language [Special Issue]. Brain and Language, 172.
  • De Zubicaray, G., & Fisher, S. E. (2017). Genes, Brain, and Language: A brief introduction to the Special Issue. Brain and Language, 172, 1-2. doi:10.1016/j.bandl.2017.08.003.
  • Zwaan, R. A., Van der Stoep, N., Guadalupe, T., & Bouwmeester, S. (2012). Language comprehension in the balance: The robustness of the action-compatibility effect (ACE). PLoS One, 7(2), e31204. doi:10.1371/journal.pone.0031204.

    Abstract

    How does language comprehension interact with motor activity? We investigated the conditions under which comprehending an action sentence affects people's balance. We performed two experiments to assess whether sentences describing forward or backward movement modulate the lateral movements made by subjects who made sensibility judgments about the sentences. In one experiment subjects were standing on a balance board and in the other they were seated on a balance board that was mounted on a chair. This allowed us to investigate whether the action compatibility effect (ACE) is robust and persists in the face of salient incompatibilities between sentence content and subject movement. Growth-curve analysis of the movement trajectories produced by the subjects in response to the sentences suggests that the ACE is indeed robust. Sentence content influenced movement trajectory despite salient inconsistencies between implied and actual movement. These results are interpreted in the context of the current discussion of embodied, or grounded, language comprehension and meaning representation.
  • Zwitserlood, I. (2012). Classifiers. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign Language: an International Handbook (pp. 158-186). Berlin: Mouton de Gruyter.

    Abstract

    Classifiers (currently also called 'depicting handshapes'), are observed in almost all signed languages studied to date and form a well-researched topic in sign language linguistics. Yet, these elements are still subject to much debate with respect to a variety of matters. Several different categories of classifiers have been posited on the basis of their semantics and the linguistic context in which they occur. The function(s) of classifiers are not fully clear yet. Similarly, there are differing opinions regarding their structure and the structure of the signs in which they appear. Partly as a result of comparison to classifiers in spoken languages, the term 'classifier' itself is under debate. In contrast to these disagreements, most studies on the acquisition of classifier constructions seem to consent that these are difficult to master for Deaf children. This article presents and discusses all these issues from the viewpoint that classifiers are linguistic elements.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). An empirical investigation of expression of multiple entities in Turkish Sign Language (TİD): Considering the effects of modality. Lingua, 122, 1636-1667. doi:10.1016/j.lingua.2012.08.010.

    Abstract

    This paper explores the expression of multiple entities in Turkish Sign Language (Türk İşaret Dili; TİD), a less well-studied sign language. It aims to provide a comprehensive description of the ways and frequencies in which entity plurality in this language is expressed, both within and outside the noun phrase. We used a corpus that includes both elicited and spontaneous data from native signers. The results reveal that most of the expressions of multiple entities in TİD are iconic, spatial strategies (i.e. localization and spatial plural predicate inflection) none of which, we argue, should be considered as genuine plural marking devices with the main aim of expressing plurality. Instead, the observed devices for localization and predicate inflection allow for a plural interpretation when multiple locations in space are used. Our data do not provide evidence that TİD employs (productive) morphological plural marking (i.e. reduplication) on nouns, in contrast to some other sign languages and many spoken languages. We relate our findings to expression of multiple entities in other signed languages and in spoken languages and discuss these findings in terms of modality effects on expression of multiple entities in human language.
