Publications

Displaying 1401-1497 of 1497
  • Verheijen, J., & Sleegers, K. (2018). Understanding Alzheimer Disease at the interface between genetics and transcriptomics. Trends in Genetics, 34(6), 434-447. doi:10.1016/j.tig.2018.02.007.

    Abstract

    Over 25 genes are known to affect the risk of developing Alzheimer disease (AD), the most common neurodegenerative dementia. However, mechanistic insights and improved disease management remain limited, due to difficulties in determining the functional consequences of genetic associations. Transcriptomics is increasingly being used to corroborate or enhance the interpretation of genetic discoveries. These approaches, which include second- and third-generation sequencing, single-cell sequencing, and bioinformatics, reveal allele-specific events connecting AD risk genes to expression profiles, and provide converging evidence of pathophysiological pathways underlying AD. Simultaneously, they highlight brain region- and cell-type-specific expression patterns, and alternative splicing events that affect the straightforward relation between a genetic variant and AD, re-emphasizing the need for an integrated approach of genetics and transcriptomics in understanding AD.
  • Verhoeven, L., Schreuder, R., & Baayen, R. H. (2003). Units of analysis in reading Dutch bisyllabic pseudowords. Scientific Studies of Reading, 7(3), 255-271. doi:10.1207/S1532799XSSR0703_4.

    Abstract

    Two experiments were carried out to explore the units of analysis used by children to read Dutch bisyllabic pseudowords. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme e may represent three different vowels: /∊/, /e/, or /λ/. In Experiment 1, Grade 6 elementary school children were presented with lists of bisyllabic pseudowords containing the grapheme e in the initial syllable representing a content morpheme, a prefix, or a random string. On the basis of general word frequency data, we expected the interpretation of the initial syllable as a random string to elicit the pronunciation of a stressed /e/, the interpretation of the initial syllable as a content morpheme to elicit the pronunciation of a stressed /∊/, and the interpretation as a prefix to elicit the pronunciation of an unstressed /λ/. We found both the pronunciation and the stress assignment for pseudowords to depend on word type, which shows morpheme boundaries and prefixes to be identified. However, the identification of prefixes could also be explained by the correspondence of the prefix boundaries in the pseudowords to syllable boundaries. To exclude this alternative explanation, a follow-up experiment with the same group of children was conducted using bisyllabic pseudowords containing prefixes that did not coincide with syllable boundaries versus similar pseudowords with no prefix. The results of the first experiment were replicated. That is, the children identified prefixes and shifted their assignment of word stress accordingly. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C. (2017). What bats have to say about speech and language. Psychonomic Bulletin & Review, 24(1), 111-117. doi:10.3758/s13423-016-1060-3.

    Abstract

    Understanding the biological foundations of language is vital to gaining insight into how the capacity for language may have evolved in humans. Animal models can be exploited to learn about the biological underpinnings of shared human traits, and although no other animals display speech or language, a range of behaviors found throughout the animal kingdom are relevant to speech and spoken language. To date, such investigations have been dominated by studies of our closest primate relatives searching for shared traits, or more distantly related species that are sophisticated vocal communicators, like songbirds. Herein I make the case for turning our attention to the Chiropterans, to shed new light on the biological encoding and evolution of human language-relevant traits. Bats employ complex vocalizations to facilitate navigation as well as social interactions, and are exquisitely tuned to acoustic information. Furthermore, bats display behaviors such as vocal learning and vocal turn-taking that are directly pertinent for human spoken language. Emerging technologies are now allowing the study of bat vocal communication, from the behavioral to the neurobiological and molecular level. Although it is clear that no single animal model can reflect the complexity of human language, by comparing such findings across diverse species we can identify the shared biological mechanisms likely to have influenced the evolution of human language.
  • Vernes, S. C. (2018). Vocal learning in bats: From genes to behaviour. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 516-518). Toruń, Poland: NCU Press. doi:10.12775/3991-1.128.
  • Viebahn, M. C., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in natural speech. In Proceedings of INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association (pp. 2019-2022).

    Abstract

    This paper presents a corpus study that investigates the co-occurrence of reduced word forms in natural speech. We extracted Dutch past participles from three different speech registers and investigated the influence of several predictor variables on the presence and duration of schwas in prefixes and /t/s in suffixes. Our results suggest that reduced word forms tend to co-occur even if we partial out the effect of speech rate. The implications of our findings for episodic and abstractionist models of lexical representation are discussed.
  • Viebahn, M., McQueen, J. M., Ernestus, M., Frauenfelder, U. H., & Bürki, A. (2018). How much does orthography influence the processing of reduced word forms? Evidence from novel-word learning about French schwa deletion. The Quarterly Journal of Experimental Psychology, 71(11), 2378-2394. doi:10.1177/1747021817741859.

    Abstract

    This study examines the influence of orthography on the processing of reduced word forms. For this purpose, we compared the impact of phonological variation with the impact of spelling-sound consistency on the processing of words that may be produced with or without the vowel schwa. Participants learnt novel French words in which the vowel schwa was present or absent in the first syllable. In Experiment 1, the words were consistently produced without schwa or produced in a variable manner (i.e., sometimes produced with and sometimes produced without schwa). In Experiment 2, words were always produced in a consistent manner, but an orthographic exposure phase was included in which words that were produced without schwa were either spelled with or without the letter e. Results from naming and eye-tracking tasks suggest that both phonological variation and spelling-sound consistency influence the processing of spoken novel words. However, the influence of phonological variation outweighs the effect of spelling-sound consistency. Our findings therefore suggest that the influence of orthography on the processing of reduced word forms is relatively small.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2017). Speaking style influences the brain’s electrophysiological response to grammatical errors in speech comprehension. Journal of Cognitive Neuroscience, 29(7), 1132-1146. doi:10.1162/jocn_a_01095.

    Abstract

    This electrophysiological study asked whether the brain processes grammatical gender violations in casual speech differently than in careful speech. Native speakers of Dutch were presented with utterances that contained adjective-noun pairs in which the adjective was either correctly inflected with a word-final schwa (e.g. een spannende roman “a suspenseful novel”) or incorrectly uninflected without that schwa (een spannend roman). Consistent with previous findings, the uninflected adjectives elicited an electrical brain response sensitive to syntactic violations when the talker was speaking in a careful manner. When the talker was speaking in a casual manner, this response was absent. A control condition showed electrophysiological responses for carefully as well as casually produced utterances with semantic anomalies, showing that listeners were able to understand the content of both types of utterance. The results suggest that listeners take information about the speaking style of a talker into account when processing the acoustic-phonetic information provided by the speech signal. Absent schwas in casual speech are effectively not grammatical gender violations. These changes in syntactic processing are evidence of contextually-driven neural flexibility.

  • Vogels, J., & Van Bergen, G. (2017). Where to place inaccessible subjects in Dutch: The role of definiteness and animacy. Corpus Linguistics and Linguistic Theory, 13(2), 369-398. doi:10.1515/cllt-2013-0021.

    Abstract

    Cross-linguistically, both subjects and topical information tend to be placed at the beginning of a sentence. Subjects are generally highly topical, causing both tendencies to converge on the same word order. However, subjects that lack prototypical topic properties may give rise to an incongruence between the preference to start a sentence with the subject and the preference to start a sentence with the most accessible information. We present a corpus study in which we investigate in what syntactic position (preverbal or postverbal) such low-accessible subjects are typically found in Dutch natural language. We examine the effects of both discourse accessibility (definiteness) and inherent accessibility (animacy). Our results show that definiteness and animacy interact in determining subject position in Dutch. Non-referential (bare) subjects are less likely to occur in preverbal position than definite subjects, and this tendency is reinforced when the subject is inanimate. This suggests that these two properties, which both make the subject less accessible, can ‘gang up’ against the subject-first preference. The results support a probabilistic multifactorial account of syntactic variation.
  • Volker-Touw, C. M., de Koning, H. D., Giltay, J., De Kovel, C. G. F., van Kempen, T. S., Oberndorff, K., Boes, M., van Steensel, M. A., van Well, G. T., Blokx, W. A., Schalkwijk, J., Simon, A., Frenkel, J., & van Gijn, M. E. (2017). Erythematous nodes, urticarial rash and arthralgias in a large pedigree with NLRC4-related autoinflammatory disease, expansion of the phenotype. British Journal of Dermatology, 176(1), 244-248. doi:10.1111/bjd.14757.

    Abstract

    Autoinflammatory disorders (AID) are a heterogeneous group of diseases, characterized by an unprovoked innate immune response, resulting in recurrent or ongoing systemic inflammation and fever [1-3]. Inflammasomes are protein complexes with an essential role in pyroptosis and the caspase-1-mediated activation of the proinflammatory cytokines IL-1β, IL-17 and IL-18.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2003). Two ways of construing complex temporal structures. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 97-133). Amsterdam: Benjamins.
  • Von Holzen, K., & Bergmann, C. (2018). A Meta-Analysis of Infants’ Mispronunciation Sensitivity Development. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1159-1164). Austin, TX: Cognitive Science Society.

    Abstract

    Before infants become mature speakers of their native language, they must acquire a robust word-recognition system which allows them to strike the balance between allowing some variation (mood, voice, accent) and recognizing variability that potentially changes meaning (e.g. cat vs hat). The current meta-analysis quantifies how the latter, termed mispronunciation sensitivity, changes over infants’ first three years, testing competing predictions of mainstream language acquisition theories. Our results show that infants were sensitive to mispronunciations, but accepted them as labels for target objects. Interestingly, and in contrast to predictions of mainstream theories, mispronunciation sensitivity was not modulated by infant age, suggesting that a sufficiently flexible understanding of native language phonology is in place at a young age.
  • von Stutterheim, C., Andermann, M., Carroll, M., Flecken, M., & Schmiedtova, B. (2012). How grammaticized concepts shape event conceptualization in language production: Insights from linguistic analysis, eye tracking data, and memory performance. Linguistics, 50(4), 833-867. doi:10.1515/ling-2012-0026.

    Abstract

    The role of grammatical systems in profiling particular conceptual categories is used as a key in exploring questions concerning language specificity during the conceptualization phase in language production. This study focuses on the extent to which crosslinguistic differences in the concepts profiled by grammatical means in the domain of temporality (grammatical aspect) affect event conceptualization and distribution of attention when talking about motion events. The analyses, which cover native speakers of Standard Arabic, Czech, Dutch, English, German, Russian and Spanish, not only involve linguistic evidence, but also data from an eye tracking experiment and a memory test. The findings show that direction of attention to particular parts of motion events varies to some extent with the existence of grammaticized means to express imperfective/progressive aspect. Speakers of languages that do not have grammaticized aspect of this type are more likely to take a holistic view when talking about motion events and attend to as well as refer to endpoints of motion events, in contrast to speakers of aspect languages.

  • Vonk, W., & Cozijn, R. (2003). On the treatment of saccades and regressions in eye movement measures of reading time. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 291-312). Amsterdam: Elsevier.
  • De Vos, C. (2012). Sign-spatiality in Kata Kolok: How a village sign language in Bali inscribes its signing space. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    In a small village in the north of Bali called Bengkala, relatively many people inherit deafness. The Balinese therefore refer to this village as Desa Kolok, which means 'deaf village'. Connie de Vos studied Kata Kolok, the sign language of this village, and the ways in which the language recruits space to talk about both spatial and non-spatial matters. The small village community of Bengkala in the north of Bali has almost 3,000 inhabitants. Of all the inhabitants, 57% use sign language, with varying degrees of fluency. But of this signing community (between 1,200 and 1,800 signers, depending on your definition of 'signer'), only 4% are deaf. So, not only do the deaf people of Bengkala use the sign language Kata Kolok, but so does the majority of the hearing population.
    "I've worked with deaf people from all over Asia, Europe, and also some signers in America," says Connie de Vos of MPI's Language and Cognition Department, and Centre for Language Studies (RU). "What sets apart this particular deaf village is that deaf individuals are highly integrated within the village clans. There is really a huge proportion of hearing signers." The sign language currently functions in all major aspects of village life and has been acquired from birth by multiple generations of deaf, native signers. According to De Vos, Kata Kolok is a fully-fledged sign language in every sense of the word. As a collaborative project, she has initiated inclusive deaf education within the village and now Kata Kolok is used as the primary language of instruction. De Vos' primary finding is that Kata Kolok discourse uses a different system of referring to space than other sign languages. Spatial relations are represented by a so-called "absolute frame of reference", based on geographic locations and wind directions. "All sign languages, as we know, use relative constructions for spatial relations. They use signs comparable to words like 'left' and 'right' instead of 'east' and 'west'. Kata Kolok does the latter. Kata Kolok signers appear to have an internal compass to continually register their position in space." De Vos is the first sign linguist who has documented Kata Kolok extensively. She spent more than a year in the village and collected over a hundred hours of video material of spontaneous conversations. "One of the things I've noticed is that language doesn't really emerge out of nothing," she says. "Signers adopt a local gesture system and transform it into a new and much more systematic sign language. A lot of the signs refer to concepts they're familiar with. That's why hearing signers have no difficulties in picking up Kata Kolok. Kata Kolok unites the hearing and the deaf."

  • De Vos, J., Schriefers, H., Nivard, M. C., & Lemhöfer, K. (2018). A meta‐analysis and meta‐regression of incidental second language word learning from spoken input. Language Learning, 68(4), 906-941. doi:10.1111/lang.12296.

    Abstract

    We meta‐analyzed the effectiveness of incidental second language word learning from spoken input. Our sample contained 105 effect sizes from 32 primary studies employing meaning‐focused word‐learning activities with 1,964 participants with typical cognitive functioning. The random‐effects meta‐analysis yielded a mean effect size of g = 1.05, reflecting generally large vocabulary gains from spoken input in meaning‐focused activities. A meta‐regression with three substantive and two methodological predictors also revealed that adult participants outperformed children in terms of word learning and that interactive learning tasks were more effective than noninteractive ones. Furthermore, learning scores were higher when measured with recognition than with recall tests. Methodologically, the use of a no‐input control group seemed to protect against an overestimation of learning effects, evidenced by smaller effect sizes. Finally, whether a pretest–posttest design was used did not influence effect sizes. All data and the analysis script are publicly available.
  • De Vos, C., & Palfreyman, N. (2012). [Review of the book Deaf around the World: The impact of language / ed. by Mathur & Napoli]. Journal of Linguistics, 48, 731-735.

    Abstract

    First paragraph. Since its advent half a century ago, the field of sign language linguistics has had close ties to education and the empowerment of deaf communities, a union that is fittingly celebrated by Deaf around the world: The impact of language. With this fruitful relationship in mind, sign language researchers and deaf educators gathered in Philadelphia in 2008, and in the volume under review, Gaurav Mathur & Donna Jo Napoli (henceforth M&N) present a selection of papers from this conference, organised in two parts: ‘Sign languages: Creation, context, form’, and ‘Social issues/civil rights’. Each of the chapters is accompanied by a response chapter on the same or a related topic. The first part of the volume focuses on the linguistics of sign languages and includes papers on the impact of language modality on morphosyntax, second language acquisition, and grammaticalisation, highlighting the fine balance that sign linguists need to strike when conducting methodologically sound research. The second part of the book includes accounts by deaf activists from countries including China, India, Japan, Kenya, South Africa and Sweden who are considered prominent figures in areas such as deaf education, politics, culture and international development.
  • De Vos, C., & Zeshan, U. (2012). Introduction: Demographic, sociocultural, and linguistic variation across rural signing communities. In U. Zeshan, & C. de Vos (Eds.), Sign languages in village communities: Anthropological and linguistic insights (pp. 2-23). Berlin: Mouton De Gruyter.
  • De Vos, C., & Nyst, V. A. S. (2018). Introduction: The time-depth and typology of rural sign languages. Sign Language Studies, 18(4), 477-487.
  • De Vos, C. (2012). Kata Kolok: An updated sociolinguistic profile. In U. Zeshan (Ed.), Sign languages in village communities: Anthropological and linguistic insights (pp. 381-386). Berlin: Mouton de Gruyter.
  • De Vos, J., Schriefers, H., & Lemhöfer, K. (2018). Noticing vocabulary holes aids incidental second language word learning: An experimental study. Bilingualism: Language and Cognition, 22(3), 500-515. doi:10.1017/S1366728918000019.

    Abstract

    Noticing the hole (NTH) occurs when speakers want to say something, but realise they do not know the right word(s). Such awareness of lacking knowledge supposedly facilitates the acquisition of the unknown word(s) from later input (Swain, 1993). We tested this claim by experimentally inducing NTH in a second language (L2) for some participants (experimental), but not others (control). Then, in a price comparison game, all participants were exposed to spoken L2 input containing the to-be-learned words. They were unaware of taking part in an L2 study. Post-tests showed that participants who had noticed holes in their vocabulary had indeed learned more words compared to participants who had not. This held for the experimental group as well as for those participants in the control group who later reported having noticed holes. Thus, when we become aware of vocabulary holes, the first step to improve our vocabulary is already taken.
  • De Vos, C. (2012). The Kata Kolok perfective in child signing: Coordination of manual and non-manual components. In U. Zeshan, & C. De Vos (Eds.), Sign languages in village communities: Anthropological and linguistic insights (pp. 127-152). Berlin: Mouton de Gruyter.
  • Vosse, T., & Kempen, G. (1991). A hybrid model of human sentence processing: Parsing right-branching, center-embedded and cross-serial dependencies. In M. Tomita (Ed.), Proceedings of the Second International Workshop on Parsing Technologies.
  • De Vries, C., Reijnierse, W. G., & Willems, R. M. (2018). Eye movements reveal readers’ sensitivity to deliberate metaphors during narrative reading. Scientific Study of Literature, 8(1), 135-164. doi:10.1075/ssol.18008.vri.

    Abstract

    Metaphors occur frequently in literary texts. Deliberate Metaphor Theory (DMT; e.g., Steen, 2017) proposes that metaphors that serve a communicative function as metaphor are radically different from metaphors that do not have this function. We investigated differences in processing between deliberate and non-deliberate metaphors, compared to non-metaphorical words in literary reading. Using the Deliberate Metaphor Identification Procedure (Reijnierse et al., 2018), we identified metaphors in two literary stories. Then, eye-tracking was used to investigate participants’ (N = 72) reading behavior. Deliberate metaphors were read slower than non-deliberate metaphors, and both metaphor types were read slower than non-metaphorical words. These differences held when controlling for several psycholinguistic variables. Differences in reading behavior were related to individual differences in reading experience and in absorption and appreciation of the story. These results are in line with predictions from DMT and underline the importance of distinguishing between metaphor types in the experimental study of literary reading.
  • De Vries, M. H., Petersson, K. M., Geukes, S., Zwitserlood, P., & Christiansen, M. H. (2012). Processing multiple non-adjacent dependencies: Evidence from sequence learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2065-2076. doi:10.1098/rstb.2011.0414.

    Abstract

    Processing non-adjacent dependencies is considered to be one of the hallmarks of human language. Assuming that sequence-learning tasks provide a useful way to tap natural-language-processing mechanisms, we cross-modally combined serial reaction time and artificial-grammar learning paradigms to investigate the processing of multiple nested (A1A2A3B3B2B1) and crossed dependencies (A1A2A3B1B2B3), containing either three or two dependencies. Both reaction times and prediction errors highlighted problems with processing the middle dependency in nested structures (A1A2A3B3_B1), reminiscent of the ‘missing-verb effect’ observed in English and French, but not with crossed structures (A1A2A3B1_B3). Prior linguistic experience did not play a major role: native speakers of German and Dutch—which permit nested and crossed dependencies, respectively—showed a similar pattern of results for sequences with three dependencies. As for sequences with two dependencies, reaction times and prediction errors were similar for both nested and crossed dependencies. The results suggest that constraints on the processing of multiple non-adjacent dependencies are determined by the specific ordering of the non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved (i.e. two or three). Furthermore, these constraints may not be specific to language but instead derive from limitations on structured sequence learning.
  • Vromans, R. D., & Jongman, S. R. (2018). The interplay between selective and nonselective inhibition during single word production. PLoS One, 13(5): e0197313. doi:10.1371/journal.pone.0197313.

    Abstract

    The present study investigated the interplay between selective inhibition (the ability to suppress specific competing responses) and nonselective inhibition (the ability to suppress any inappropriate response) during single word production. To this end, we combined two well-established research paradigms: the picture-word interference task and the stop-signal task. Selective inhibition was assessed by instructing participants to name target pictures (e.g., dog) in the presence of semantically related (e.g., cat) or unrelated (e.g., window) distractor words. Nonselective inhibition was tested by occasionally presenting a visual stop-signal, indicating that participants should withhold their verbal response. The stop-signal was presented early (250 ms) aimed at interrupting the lexical selection stage, and late (325 ms) to influence the word-encoding stage of the speech production process. We found longer naming latencies for pictures with semantically related distractors than with unrelated distractors (semantic interference effect). The results further showed that, at both delays, stopping latencies (i.e., stop-signal RTs) were prolonged for naming pictures with semantically related distractors compared to pictures with unrelated distractors. Taken together, our findings suggest that selective and nonselective inhibition, at least partly, share a common inhibitory mechanism during different stages of the speech production process.

  • Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.

    Abstract

    Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, indicating that rhyme judgment skills decrease when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also took EEG measures. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., & Verhoeven, L. (2012). The nature of rhyme processing in preliterate children. British Journal of Educational Psychology, 82, 672-689. doi:10.1111/j.2044-8279.2011.02055.x.

    Abstract

    Background. Rhyme awareness is one of the earliest forms of phonological awareness to develop and is assessed in many developmental studies by means of a simple rhyme task. The influence of more demanding experimental paradigms on rhyme judgment performance is often neglected. Addressing this issue may also shed light on whether rhyme processing is more global or analytical in nature. Aims. The aim of the present study was to examine whether lexical status and global similarity relations influenced rhyme judgments in kindergarten children and, if so, whether there is an interaction between these two factors. Sample. Participants were 41 monolingual Dutch-speaking preliterate kindergartners (average age 6.0 years) who had not yet received any formal reading education. Method. To examine the effects of lexical status and phonological similarity processing, the kindergartners were asked to make rhyme judgments on (pseudo) word targets that rhymed, phonologically overlapped or were unrelated to (pseudo) word primes. Results. Both a lexicality effect (pseudo-words were more difficult than words) and a global similarity effect (globally similar non-rhyming items were more difficult to reject than unrelated items) were observed. In addition, whereas in words the global similarity effect was only present in accuracy outcomes, in pseudo-words it was also observed in the response latencies. Furthermore, a large global similarity effect in pseudo-words correlated with a low score on short-term memory skills and grapheme knowledge. Conclusions. Increasing task demands led to a more detailed assessment of rhyme processing skills. Current assessment paradigms should therefore be extended with more demanding conditions. In light of the views on rhyme processing, we propose that a combination of global and analytical strategies is used to make a correct rhyme judgment.
  • Wagner, A., & Braun, A. (2003). Is voice quality language-dependent? Acoustic analyses based on speakers of three different languages. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 651-654). Adelaide: Causal Productions.
  • Walker, R. M., Hill, A. E., Newman, A. C., Hamilton, G., Torrance, H. S., Anderson, S. M., Ogawa, F., Derizioti, P., Nicod, J., Vernes, S. C., Fisher, S. E., Thomson, P. A., Porteous, D. J., & Evans, K. L. (2012). The DISC1 promoter: Characterization and regulation by FOXP2. Human Molecular Genetics, 21, 2862-2872. doi:10.1093/hmg/dds111.

    Abstract

    Disrupted in schizophrenia 1 (DISC1) is a leading candidate susceptibility gene for schizophrenia, bipolar disorder, and recurrent major depression, which has been implicated in other psychiatric illnesses of neurodevelopmental origin, including autism. DISC1 was initially identified at the breakpoint of a balanced chromosomal translocation, t(1;11) (q42.1;14.3), in a family with a high incidence of psychiatric illness. Carriers of the translocation show a 50% reduction in DISC1 protein levels, suggesting altered DISC1 expression as a pathogenic mechanism in psychiatric illness. Altered DISC1 expression in the post-mortem brains of individuals with psychiatric illness and the frequent implication of non-coding regions of the gene by association analysis further support this assertion. Here, we provide the first characterisation of the DISC1 promoter region. Using dual luciferase assays, we demonstrate that a region -300bp to -177bp relative to the transcription start site (TSS) contributes positively to DISC1 promoter activity, whilst a region -982bp to -301bp relative to the TSS confers a repressive effect. We further demonstrate inhibition of DISC1 promoter activity and protein expression by FOXP2, a transcription factor implicated in speech and language function. This inhibition is diminished by two distinct FOXP2 point mutations, R553H and R328X, which were previously found in families affected by developmental verbal dyspraxia (DVD). Our work identifies an intriguing mechanistic link between neurodevelopmental disorders that have traditionally been viewed as diagnostically distinct but which do share varying degrees of phenotypic overlap.
  • Waller, D., & Haun, D. B. M. (2003). Scaling techniques for modeling directional knowledge. Behavior Research Methods, Instruments, & Computers, 35(2), 285-293.

    Abstract

    A common way for researchers to model or graphically portray spatial knowledge of a large environment is by applying multidimensional scaling (MDS) to a set of pairwise distance estimations. We introduce two MDS-like techniques that incorporate people’s knowledge of directions instead of (or in addition to) their knowledge of distances. Maps of a familiar environment derived from these procedures were more accurate and were rated by participants as being more accurate than those derived from nonmetric MDS. By incorporating people’s relatively accurate knowledge of directions, these methods offer spatial cognition researchers and behavioral geographers a sharper analytical tool than MDS for studying cognitive maps.
  • Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.

    Abstract

    The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
  • Wang, L., Hagoort, P., & Jensen, O. (2018). Language prediction is reflected by coupling between frontal gamma and posterior alpha oscillations. Journal of Cognitive Neuroscience, 30(3), 432-447. doi:10.1162/jocn_a_01190.

    Abstract

    Readers and listeners actively predict upcoming words during language processing. These predictions might serve to support the unification of incoming words into sentence context and thus rely on interactions between areas in the language network. In the current magnetoencephalography study, participants read sentences that varied in contextual constraints so that the predictability of the sentence-final words was either high or low. Before the sentence-final words, we observed stronger alpha power suppression for the highly compared with low constraining sentences in the left inferior frontal cortex, left posterior temporal region, and visual word form area. Importantly, the temporal and visual word form area alpha power correlated negatively with left frontal gamma power for the highly constraining sentences. We suggest that the correlation between alpha power decrease in temporal language areas and left prefrontal gamma power reflects the initiation of an anticipatory unification process in the language network.
  • Wang, L., Hagoort, P., & Jensen, O. (2018). Gamma oscillatory activity related to language prediction. Journal of Cognitive Neuroscience, 30(8), 1075-1085. doi:10.1162/jocn_a_01275.

    Abstract

    Using magnetoencephalography, the current study examined gamma activity associated with language prediction. Participants read high- and low-constraining sentences in which the final word of the sentence was either expected or unexpected. Although no consistent gamma power difference induced by the sentence-final words was found between the expected and unexpected conditions, the correlation of gamma power during the prediction and activation intervals of the sentence-final words was larger when the presented words matched with the prediction compared with when the prediction was violated or when no prediction was available. This suggests that gamma magnitude relates to the match between predicted and perceived words. Moreover, the expected words induced activity with a slower gamma frequency compared with that induced by unexpected words. Overall, the current study establishes that prediction is related to gamma power correlations and a slowing of the gamma frequency.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.

    Abstract

    Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion.
  • Wang, L., Zhu, Z., & Bastiaansen, M. C. M. (2012). Integration or predictability? A further specification of the functional role of gamma oscillations in language comprehension. Frontiers in Psychology, 3, 187. doi:10.3389/fpsyg.2012.00187.

    Abstract

    Gamma-band neuronal synchronization during sentence-level language comprehension has previously been linked with semantic unification. Here, we attempt to further narrow down the functional significance of gamma during language comprehension, by distinguishing between two aspects of semantic unification: successful integration of word meaning into the sentence context, and prediction of upcoming words. We computed event-related potentials (ERPs) and frequency band-specific electroencephalographic (EEG) power changes while participants read sentences that contained a critical word (CW) that was (1) both semantically congruent and predictable (high cloze, HC), (2) semantically congruent but unpredictable (low cloze, LC), or (3) semantically incongruent (and therefore also unpredictable; semantic violation, SV). The ERP analysis showed the expected parametric N400 modulation (HC < LC < SV). The time-frequency analysis showed qualitatively different results. In the gamma-frequency range, we observed a power increase in response to the CW in the HC condition, but not in the LC and the SV conditions. Additionally, in the theta frequency range we observed a power increase in the SV condition only. Our data provide evidence that gamma power increases are related to the predictability of an upcoming word based on the preceding sentence context, rather than to the integration of the incoming word’s semantics into the preceding context. Further, our theta band data are compatible with the notion that theta band synchronization in sentence comprehension might be related to the detection of an error in the language input.
  • Wang, M., Shao, Z., Chen, Y., & Schiller, N. O. (2018). Neural correlates of spoken word production in semantic and phonological blocked cyclic naming. Language, Cognition and Neuroscience, 33(5), 575-586. doi:10.1080/23273798.2017.1395467.

    Abstract

    The blocked cyclic naming paradigm has been increasingly employed to investigate the mechanisms underlying spoken word production. Semantic homogeneity typically elicits longer naming latencies than heterogeneity; however, it is debated whether competitive lexical selection or incremental learning underlies this effect. The current study manipulated both semantic and phonological homogeneity and used behavioural and electrophysiological measurements to provide evidence that can distinguish between the two accounts. Results show that naming latencies are longer in semantically homogeneous blocks, but shorter in phonologically homogeneous blocks, relative to heterogeneity. The semantic factor significantly modulates electrophysiological waveforms from 200 ms and the phonological factor from 350 ms after picture presentation. A positive component was demonstrated in both manipulations, possibly reflecting a task-related top-down bias in performing blocked cyclic naming. These results provide novel insights into the neural correlates of blocked cyclic naming and further contribute to the understanding of spoken word production.
  • Wanke, K., Devanna, P., & Vernes, S. C. (2018). Understanding neurodevelopmental disorders: The promise of regulatory variation in the 3’UTRome. Biological Psychiatry, 83(7), 548-557. doi:10.1016/j.biopsych.2017.11.006.

    Abstract

    Neurodevelopmental disorders have a strong genetic component, but despite widespread efforts, the specific genetic factors underlying these disorders remain undefined for a large proportion of affected individuals. Given the accessibility of exome-sequencing, this problem has thus far been addressed from a protein-centric standpoint; however, protein-coding regions only make up ∼1-2% of the human genome. With the advent of whole-genome sequencing we are in the midst of a paradigm shift as it is now possible to interrogate the entire sequence of the human genome (coding and non-coding) to fill in the missing heritability of complex disorders. These new technologies bring new challenges, as the number of non-coding variants identified per individual can be overwhelming, making it prudent to focus on non-coding regions of known function, for which the effects of variation can be predicted and directly tested to assess pathogenicity. The 3’UTRome is a region of the non-coding genome that perfectly fulfils these criteria and is of high interest when searching for pathogenic variation related to complex neurodevelopmental disorders. Herein, we review the regulatory roles of the 3’UTRome as binding sites for microRNAs, RNA binding proteins or during alternative polyadenylation. We detail existing evidence that these regions contribute to neurodevelopmental disorders and outline strategies for identification and validation of novel putatively pathogenic variation in these regions. This evidence suggests that studying the 3’UTRome will lead to the identification of new risk factors, new candidate disease genes and a better understanding of the molecular mechanisms contributing to NDDs.

    Additional information

    1-s2.0-S0006322317321911-mmc1.pdf
  • Warner, N. (2003). Rapid perceptibility as a factor underlying universals of vowel inventories. In A. Carnie, H. Harley, & M. Willie (Eds.), Formal approaches to function in grammar, in honor of Eloise Jelinek (pp. 245-261). Amsterdam: Benjamins.
  • Warner, N., & Cutler, A. (2017). Stress effects in vowel perception as a function of language-specific vocabulary patterns. Phonetica, 74, 81-106. doi:10.1159/000447428.

    Abstract

    Background/Aims: Evidence from spoken word recognition suggests that for English listeners, distinguishing full versus reduced vowels is important, but discerning stress differences involving the same full vowel (as in mu- from music or museum) is not. In Dutch, in contrast, the latter distinction is important. This difference arises from the relative frequency of unstressed full vowels in the two vocabularies. The goal of this paper is to determine how this difference in the lexicon influences the perception of stressed versus unstressed vowels. Methods: All possible sequences of two segments (diphones) in Dutch and in English were presented to native listeners in gated fragments. We recorded identification performance over time throughout the speech signal. The data were here analysed specifically for patterns in perception of stressed versus unstressed vowels. Results: The data reveal significantly larger stress effects (whereby unstressed vowels are harder to identify than stressed vowels) in English than in Dutch. Both language-specific and shared patterns appear regarding which vowels show stress effects. Conclusion: We explain the larger stress effect in English as reflecting the processing demands caused by the difference in use of unstressed vowels in the lexicon. The larger stress effect in English is due to relative inexperience with processing unstressed full vowels.
  • Warner, N. L., McQueen, J. M., Liu, P. Z., Hoffmann, M., & Cutler, A. (2012). Timing of perception for all English diphones [Abstract]. Program abstracts from the 164th Meeting of the Acoustical Society of America published in the Journal of the Acoustical Society of America, 132(3), 1967.

    Abstract

    Information in speech does not unfold discretely over time; perceptual cues are gradient and overlapped. However, this varies greatly across segments and environments: listeners cannot identify the affricate in /ptS/ until the frication, but information about the vowel in /li/ begins early. Unlike most prior studies, which have concentrated on subsets of language sounds, this study tests perception of every English segment in every phonetic environment, sampling perceptual identification at six points in time (13,470 stimuli/listener; 20 listeners). Results show that information about consonants after another segment is most localized for affricates (almost entirely in the release), and most gradual for voiced stops. In comparison to stressed vowels, unstressed vowels have less information spreading to neighboring segments and are less well identified. Indeed, many vowels, especially lax ones, are poorly identified even by the end of the following segment. This may partly reflect listeners’ familiarity with English vowels’ dialectal variability. Diphthongs and diphthongal tense vowels show the most sudden improvement in identification, similar to affricates among the consonants, suggesting that information about segments defined by acoustic change is highly localized. This large dataset provides insights into speech perception and data for probabilistic modeling of spoken word recognition.
  • Watson, L. M., Wong, M. M. K., Vowles, J., Cowley, S. A., & Becker, E. B. E. (2018). A simplified method for generating purkinje cells from human-induced pluripotent stem cells. The Cerebellum, 17(4), 419-427. doi:10.1007/s12311-017-0913-2.

    Abstract

    The establishment of a reliable model for the study of Purkinje cells in vitro is of particular importance, given their central role in cerebellar function and pathology. Recent advances in induced pluripotent stem cell (iPSC) technology offer the opportunity to generate multiple neuronal subtypes for study in vitro. However, to date, only a handful of studies have generated Purkinje cells from human pluripotent stem cells, with most of these protocols proving challenging to reproduce. Here, we describe a simplified method for the reproducible generation of Purkinje cells from human iPSCs. After 21 days of treatment with factors selected to mimic the self-inductive properties of the isthmic organiser—insulin, fibroblast growth factor 2 (FGF2), and the transforming growth factor β (TGFβ)-receptor blocker SB431542—hiPSCs could be induced to form En1-positive cerebellar progenitors at efficiencies of up to 90%. By day 35 of differentiation, subpopulations of cells representative of the two cerebellar germinal zones, the rhombic lip (Atoh1-positive) and ventricular zone (Ptf1a-positive), could be identified, with the latter giving rise to cells positive for Purkinje cell progenitor-specific markers, including Lhx5, Kirrel2, Olig2 and Skor2. Further maturation was observed following dissociation and co-culture of these cerebellar progenitors with mouse cerebellar cells, with 10% of human cells staining positive for the Purkinje cell marker calbindin by day 70 of differentiation. This protocol, which incorporates modifications designed to enhance cell survival and maturation and improve the ease of handling, should serve to make existing models more accessible, in order to enable future advances in the field.

    Additional information

    12311_2017_913_MOESM1_ESM.docx
  • Weber, A., & Smits, R. (2003). Consonant and vowel confusion patterns by American English listeners. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1437-1440). Adelaide: Causal Productions.

    Abstract

    This study investigated the perception of American English phonemes by native listeners. Listeners identified either the consonant or the vowel in all possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). Effects of syllable position, signal-to-noise ratio, and articulatory features on vowel and consonant identification are discussed. The results constitute the largest source of data that is currently available on phoneme confusion patterns of American English phonemes by native listeners.
  • Weber, A., & Scharenborg, O. (2012). Models of spoken-word recognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 387-401. doi:10.1002/wcs.1178.

    Abstract

    All words of the languages we know are stored in the mental lexicon. Psycholinguistic models describe in which format lexical knowledge is stored and how it is accessed when needed for language use. The present article summarizes key findings in spoken-word recognition by humans and describes how models of spoken-word recognition account for them. Although current models of spoken-word recognition differ considerably in the details of implementation, there is general consensus among them on at least three aspects: multiple word candidates are activated in parallel as a word is being heard, activation of word candidates varies with the degree of match between the speech signal and stored lexical representations, and activated candidate words compete for recognition. No consensus has been reached on other aspects such as the flow of information between different processing levels, and the format of stored prelexical and lexical representations.
  • Weber, A., & Crocker, M. W. (2012). On the nature of semantic constraints on lexical access. Journal of Psycholinguistic Research, 41, 195-214. doi:10.1007/s10936-011-9184-0.

    Abstract

    We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, ‘flower’) when instructed to click on low-frequency targets (e.g., Bluse, ‘blouse’). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents.
  • Weber, A., & Cutler, A. (2003). Perceptual similarity co-existing with lexical dissimilarity [Abstract]. Abstracts of the 146th Meeting of the Acoustical Society of America. Journal of the Acoustical Society of America, 114(4 Pt. 2), 2422. doi:10.1121/1.1601094.

    Abstract

    The extreme case of perceptual similarity is indiscriminability, as when two second‐language phonemes map to a single native category. An example is the English had‐head vowel contrast for Dutch listeners; Dutch has just one such central vowel, transcribed [E]. We examine whether the failure to discriminate in phonetic categorization implies indiscriminability in other—e.g., lexical—processing. Eyetracking experiments show that Dutch‐native listeners instructed in English to ‘‘click on the panda’’ look (significantly more than native listeners) at a pictured pencil, suggesting that pan‐ activates their lexical representation of pencil. The reverse, however, is not the case: ‘‘click on the pencil’’ does not induce looks to a panda, suggesting that pen‐ does not activate panda in the lexicon. Thus prelexically undiscriminated second‐language distinctions can nevertheless be maintained in stored lexical representations. The problem of mapping a resulting unitary input to two distinct categories in lexical representations is solved by allowing input to activate only one second‐language category. For Dutch listeners to English, this is English [E], as a result of which no vowels in the signal ever map to words containing [ae]. We suggest that the choice of category is here motivated by a more abstract, phonemic, metric of similarity.
  • Weber, A., & Broersma, M. (2012). Spoken word recognition in second language acquisition. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics. Bognor Regis: Wiley-Blackwell. doi:10.1002/9781405198431.wbeal1104.

    Abstract

    In order to decode the message of a speaker, listeners have to recognize individual words in the speaker's utterance.
  • Weber, K. (2012). The language learning brain: Evidence from second language learning and bilingual studies of syntactic processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    Many people speak a second language next to their mother tongue. How do they learn this language and how does the brain process it compared to the native language? A second language can be learned without explicit instruction. Our brains automatically pick up grammatical structures, such as word order, when these structures are repeated frequently during learning. The learning takes place within hours or days and the same brain areas, such as frontal and temporal brain regions, that process our native language are very quickly activated. When people master a second language very well, even the same neuronal populations in these language brain areas are involved. This is especially the case when the grammatical structures are similar. In conclusion, it appears that a second language builds on the existing cognitive and neural mechanisms of the native language as much as possible.
  • Weekes, B. S., Abutalebi, J., Mak, H.-K.-F., Borsa, V., Soares, S. M. P., Chiu, P. W., & Zhang, L. (2018). Effect of monolingualism and bilingualism in the anterior cingulate cortex: a proton magnetic resonance spectroscopy study in two centers. Letras de Hoje, 53(1), 5-12. doi:10.15448/1984-7726.2018.1.30954.

    Abstract

    Reports of an advantage of bilingualism on brain structure in young adult participants are inconsistent. Abutalebi et al. (2012) reported more efficient monitoring of conflict during the Flanker task in young bilinguals compared to young monolingual speakers. The present study compared young adult (mean age = 24) Cantonese-English bilinguals in Hong Kong and young adult monolingual speakers. We expected (a) differences in metabolites in neural tissue to result from bilingual experience, as measured by 1H-MRS at 3T, (b) correlations between metabolic levels and Flanker conflict and interference effects (c) different associations in bilingual and monolingual speakers. We found evidence of metabolic differences in the ACC due to bilingualism, specifically in metabolites Cho, Cr, Glx and NAA. However, we found no significant correlations between metabolic levels and conflict and interference effects and no significant evidence of differential relationships between bilingual and monolingual speakers. Furthermore, we found no evidence of significant differences in the mean size of conflict and interference effects between groups i.e. no bilingual advantage. Lower levels of Cho, Cr, Glx and NAA in bilingual adults compared to monolingual adults suggest that the brains of bilinguals develop greater adaptive control during conflict monitoring because of their extensive bilingual experience.
  • Wegman, J., Tyborowska, A., Hoogman, M., Vasquez, A. A., & Janzen, G. (2017). The brain-derived neurotrophic factor Val66Met polymorphism affects encoding of object locations during active navigation. European Journal of Neuroscience, 45(12), 1501-1511. doi:10.1111/ejn.13416.

    Abstract

    The brain-derived neurotrophic factor (BDNF) was shown to be involved in spatial memory and spatial strategy preference. A naturally occurring single nucleotide polymorphism of the BDNF gene (Val66Met) affects activity-dependent secretion of BDNF. The current event-related fMRI study on preselected groups of ‘Met’ carriers and homozygotes of the ‘Val’ allele investigated the role of this polymorphism on encoding and retrieval in a virtual navigation task in 37 healthy volunteers. In each trial, participants navigated toward a target object. During encoding, three positional cues (columns) with directional cues (shadows) were available. During retrieval, the invisible target had to be replaced while either two objects without shadows (objects trial) or one object with a shadow (shadow trial) were available. The experiment consisted of blocks, informing participants of which trial type would be most likely to occur during retrieval. We observed no differences between genetic groups in task performance or time to complete the navigation tasks. The imaging results show that Met carriers compared to Val homozygotes activate the left hippocampus more during successful object location memory encoding. The observed effects were independent of non-significant performance differences or volumetric differences in the hippocampus. These results indicate that variations of the BDNF gene affect memory encoding during spatial navigation, suggesting that lower levels of BDNF in the hippocampus results in less efficient spatial memory processing
  • Weissenborn, J. (1988). Von der demonstratio ad oculos zur Deixis am Phantasma. Die Entwicklung der lokalen Referenz bei Kindern. In Karl Bühler's Theory of Language. Proceedings of the Conference held at Kirchberg, August 26, 1984 and Essen, November 21–24, 1984 (pp. 257-276). Amsterdam: Benjamins.
  • Wender, K. F., Haun, D. B. M., Rasch, B. H., & Blümke, M. (2003). Context effects in memory for routes. In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial cognition III: Routes and navigation, human memory and learning, spatial representation and spatial learning (pp. 209-231). Berlin: Springer.
  • Wheeldon, L. (2003). Inhibitory form priming of spoken word production. Language and Cognitive Processes, 18(1), 81-109. doi:10.1080/01690960143000470.

    Abstract

    Three experiments were designed to examine the effect on picture naming of the prior production of a word related in phonological form. In Experiment 1, the latency to produce Dutch words in response to pictures (e.g., hoed, hat) was longer following the production of a form-related word (e.g., hond, dog) in response to a definition on a preceding trial, than when the preceding definition elicited an unrelated word (e.g., kerk, church). Experiment 2 demonstrated that the inhibitory effect disappears when one unrelated word is produced intervening prime and target productions (e.g., hond-kerk-hoed). The size of the inhibitory effect was not significantly affected by the frequency of the prime words or the target picture names. In Experiment 3, facilitation was observed for word pairs that shared offset segments (e.g., kurk-jurk, cork-dress), whereas inhibition was observed for shared onset segments (e.g., bloed-bloem, blood-flower). However, no priming was observed for prime and target words with shared phonemes but no mismatching segments (e.g., oom-boom, uncle-tree; hek-heks, fence-witch). These findings are consistent with a process of phoneme competition during phonological encoding.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2012). Corrigendum to CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 11, 501. doi:10.1111/j.1601-183X.2012.00806.x.

    Abstract

    Corrigendum to CNTNAP2 variants affect early language development in the general population A. J. O. Whitehouse, D. V. M. Bishop, Q. W. Ang, C. E. Pennell and S. E. Fisher Genes Brain Behav (2011) doi: 10.1111/j.1601-183X.2011.00684.x. The authors have detected a typographical error in the Abstract of this paper. The error is in the fifth sentence, which reads: ‘‘On the basis of these findings, we performed analyses of four-marker haplotypes of rs2710102–rs759178–rs17236239–rs2538976 and identified significant association (haplotype TTAA, P = 0.049; haplotype GCAG,P = .0014).’’ Rather than ‘‘GCAG’’, the final haplotype should read ‘‘CGAG’’. This typographical error was made in the Abstract only and this has no bearing on the results or conclusions of the study, which remain unchanged. Reference Whitehouse, A. J. O., Bishop, D. V. M., Ang, Q. W., Pennell, C. E. & Fisher, S. E. (2011) CNTNAP2 variants affect early language development in the general population. Genes Brain Behav 10, 451–456. doi: 10.1111/j.1601-183X.2011.00684.x.
  • Whitehouse, H., & Cohen, E. (2012). Seeking a rapprochement between anthropology and the cognitive sciences: A problem-driven approach. Topics in Cognitive Science, 4, 404-412. doi:10.1111/j.1756-8765.2012.01203.x.

    Abstract

    Beller, Bender, and Medin question the necessity of including social anthropology within the cognitive sciences. We argue that there is great scope for fruitful rapprochement while agreeing that there are obstacles (even if we might wish to debate some of those specifically identified by Beller and colleagues). We frame the general problem differently, however: not in terms of the problem of reconciling disciplines and research cultures, but rather in terms of the prospects for collaborative deployment of expertise (methodological and theoretical) in problem-driven research. For the purposes of illustration, our focus in this article is on the evolution of cooperation.
  • Whorf, B. L. (2012). Language, thought, and reality: selected writings of Benjamin Lee Whorf [2nd ed.]: introduction by John B. Carroll; foreword by Stephen C. Levinson. (J. B. Carroll, S. C. Levinson, & P. Lee, Eds.). Cambridge, MA: MIT Press.

    Abstract

    The pioneering linguist Benjamin Whorf (1897–1941) grasped the relationship between human language and human thinking: how language can shape our innermost thoughts. His basic thesis is that our perception of the world and our ways of thinking about it are deeply influenced by the structure of the languages we speak. The writings collected in this volume include important papers on the Maya, Hopi, and Shawnee languages, as well as more general reflections on language and meaning. Whorf’s ideas about the relation of language and thought have always appealed to a wide audience, but their reception in expert circles has alternated between dismissal and applause. Recently the language sciences have headed in directions that give Whorf’s thinking a renewed relevance. Hence this new edition of Whorf’s classic work is especially timely. The second edition includes all the writings from the first edition as well as John Carroll’s original introduction, a new foreword by Stephen Levinson of the Max Planck Institute for Psycholinguistics that puts Whorf’s work in historical and contemporary context, and new indexes. In addition, this edition offers Whorf’s “Yale Report,” an important work from Whorf’s mature oeuvre.
  • Wiese, R., Orzechowska, P., Alday, P. M., & Ulbrich, C. (2017). Structural Principles or Frequency of Use? An ERP Experiment on the Learnability of Consonant Clusters. Frontiers in Psychology, 7: 2005. doi:10.3389/fpsyg.2016.02005.

    Abstract

    Phonological knowledge of a language involves knowledge about which segments can be combined under what conditions. Languages vary in the quantity and quality of licensed combinations, in particular sequences of consonants, with Polish being a language with a large inventory of such combinations. The present paper reports on a two-session experiment in which Polish-speaking adult participants learned nonce words with final consonant clusters. The aim was to study the role of two factors which potentially play a role in the learning of phonotactic structures: the phonological principle of sonority (ordering sound segments within the syllable according to their inherent loudness) and the (non-)existence of such clusters as a usage-based phenomenon. EEG responses in two different time windows (in contrast to behavioral responses) show linguistic processing by native speakers of Polish to be sensitive to both distinctions, in spite of the fact that Polish is rich in sonority-violating clusters. In particular, a general learning effect in terms of an N400 effect was found which was demonstrated to be different for sonority-obeying clusters than for sonority-violating clusters. Furthermore, significant interactions of formedness and session, and of existence and session, demonstrate that both factors, the sonority principle and the frequency pattern, play a role in the learning process.
  • Willems, R. M., & Francken, J. C. (2012). Embodied cognition: Taking the next step. Frontiers in Psychology, 3, 582. doi:10.3389/fpsyg.2012.00582.

    Abstract

    Recent years have seen a large number of empirical studies related to ‘embodied cognition’. While interesting and valuable, there is something dissatisfying with the current state of affairs in this research domain. Hypotheses tend to be underspecified, testing in general terms for embodied versus disembodied processing. The lack of specificity of current hypotheses can easily lead to an erosion of the embodiment concept, and result in a situation in which essentially any effect is taken as positive evidence. Such erosion is not helpful to the field and does not do justice to the importance of embodiment. Here we want to take stock, and formulate directions for how embodiment can be studied in a more fruitful fashion. As an example we will describe a few example studies that have investigated the role of sensori-motor systems in the coding of meaning (‘embodied semantics’). Instead of focusing on the dichotomy between embodied and disembodied theories, we suggest that the field move forward and ask how and when sensori-motor systems and behavior are involved in cognition.
  • Willems, R. M., & Cristia, A. (2018). Hemodynamic methods: fMRI and fNIRS. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 266-287). Hoboken: Wiley.
  • Willems, R. M., & Van Gerven, M. (2018). New fMRI methods for the study of language. In S.-A. Rueschemeyer, & M. G. Gaskell (Eds.), The Oxford Handbook of Psycholinguistics (2nd ed., pp. 975-991). Oxford: Oxford University Press.
  • Windhouwer, M., Broeder, D., & Van Uytvanck, D. (2012). A CMD core model for CLARIN web services. In Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 41-48).

    Abstract

    In the CLARIN infrastructure various national projects have started initiatives to allow users of the infrastructure to create chains or workflows of web services. The Component Metadata (CMD) core model for web services described in this paper tries to align the metadata descriptions of these various initiatives. This should allow chaining/workflow engines to find matching services and invoke them. The paper describes the landscape of web services architectures and the state of the national initiatives. Based on this, a CMD core model for CLARIN is proposed, which, within some limits, can be adapted to the specific needs of an initiative by the standard facilities of CMD. The paper closes with the current state and usage of the model and a look into the future.
  • Windhouwer, M., & Wright, S. E. (2012). Linking to linguistic data categories in ISOcat. In C. Chiarcos, S. Nordhoff, & S. Hellmann (Eds.), Linked data in linguistics: Representing and connecting language data and language metadata (pp. 99-107). Berlin: Springer.

    Abstract

    ISO Technical Committee 37, Terminology and other language and content resources, established an ISO 12620:2009 based Data Category Registry (DCR), called ISOcat (see http://www.isocat.org), to foster semantic interoperability of linguistic resources. However, this goal can only be met if the data categories are reused by a wide variety of linguistic resource types. A resource indicates its usage of data categories by linking to them. The small DC Reference XML vocabulary is used to embed links to data categories in XML documents. The link is established by a URI, which serves as the Persistent IDentifier (PID) of a data category. This paper discusses the efforts to mimic the same approach for RDF-based resources. It also introduces the RDF quad store based Relation Registry RELcat, which enables ontological relationships between data categories not supported by ISOcat and thus adds an extra level of linguistic knowledge.
  • Windhouwer, M. (2012). RELcat: a Relation Registry for ISOcat data categories. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3661-3664). European Language Resources Association (ELRA).

    Abstract

    The ISOcat Data Category Registry contains a basically flat and easily extensible list of data category specifications. To foster reuse and standardization, only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome a possible proliferation of data categories, more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or of a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. On the one hand, this allows loading existing sets of relations specified in, for example, an OWL (2) ontology or a SKOS taxonomy; on the other hand, it allows algorithms that query the registry to traverse the stored semantic network while remaining ignorant of the original source vocabulary. This paper describes first experiences with RELcat and explains some initial design decisions.
  • Windhouwer, M. (2012). Towards standardized descriptions of linguistic features: ISOcat and procedures for using common data categories. In J. Jancsary (Ed.), Proceedings of the Conference on Natural Language Processing 2012, (SFLR 2012 workshop), September 19-21, 2012, Vienna (pp. 494). Vienna: Österreichischen Gesellschaft für Artificial Intelligende (ÖGAI).

  • Winsvold, B. S., Palta, P., Eising, E., Page, C. M., The International Headache Genetics Consortium, Van den Maagdenberg, A. M. J. M., Palotie, A., & Zwart, J.-A. (2018). Epigenetic DNA methylation changes associated with headache chronification: A retrospective case-control study. Cephalalgia, 38(2), 312-322. doi:10.1177/0333102417690111.

    Abstract

    Background

    The biological mechanisms of headache chronification are poorly understood. We aimed to identify changes in DNA methylation associated with the transformation from episodic to chronic headache.
    Methods

    Participants were recruited from the population-based Norwegian HUNT Study. Thirty-six female headache patients who transformed from episodic to chronic headache between baseline and follow-up 11 years later were matched against 35 controls with episodic headache. DNA methylation was quantified at 485,000 CpG sites, and changes in methylation level at these sites were compared between cases and controls by linear regression analysis. Data were analyzed in two stages (Stages 1 and 2) and in a combined meta-analysis.
    Results

    None of the top 20 CpG sites identified in Stage 1 replicated in Stage 2 after multiple testing correction. In the combined meta-analysis the strongest associated CpG sites were related to SH2D5 and NPTX2, two brain-expressed genes involved in the regulation of synaptic plasticity. Functional enrichment analysis pointed to processes including calcium ion binding and estrogen receptor pathways.
    Conclusion

    In this first genome-wide study of DNA methylation in headache chronification several potentially implicated loci and processes were identified. The study exemplifies the use of prospectively collected population cohorts to search for epigenetic mechanisms of disease.
  • Winter, B., Perlman, M., & Majid, A. (2018). Vision dominates in perceptual language: English sensory vocabulary is optimized for usage. Cognition, 179, 213-220. doi:10.1016/j.cognition.2018.05.008.

    Abstract

    Researchers have suggested that the vocabularies of languages are oriented towards the communicative needs of language users. Here, we provide evidence demonstrating that the higher frequency of visual words in a large variety of English corpora is reflected in greater lexical differentiation—a greater number of unique words—for the visual domain in the English lexicon. In comparison, sensory modalities that are less frequently talked about, particularly taste and smell, show less lexical differentiation. In addition, we show that even though sensory language can be expected to change across historical time and between contexts of use (e.g., spoken language versus fiction), the pattern of visual dominance is a stable property of the English language. Thus, we show that for perceptual experiences, across the board, precisely those semantic domains that are more frequently talked about are also more lexically differentiated. This correlation between type and token frequencies suggests that the sensory lexicon of English is geared towards communicative efficiency.
  • Withers, P. (2012). Metadata management with Arbil. In V. Arranz, D. Broeder, B. Gaiffe, M. Gavrilidou, & M. Monachini (Eds.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 72-75). European Language Resources Association (ELRA).

    Abstract

    Arbil is an application designed to create and manage metadata for research data and to arrange this data into a structure appropriate for archiving. The metadata is displayed in tables, which allows an overview of the metadata and the ability to populate and update many metadata sections in bulk. Both IMDI and Clarin metadata formats are supported and Arbil has been designed as a local application so that it can also be used offline, for instance in remote field sites. The metadata can be entered in any order or at any stage that the user is able; once the metadata and its data are ready for archiving and an Internet connection is available it can be exported from Arbil and in the case of IMDI it can then be transferred to the main archive via LAMUS (archive management and upload system).
  • Wittenburg, P. (2003). The DOBES model of language documentation. Language Documentation and Description, 1, 122-139.
  • Wittenburg, P., Lenkiewicz, P., Auer, E., Gebre, B. G., Lenkiewicz, A., & Drude, S. (2012). AV Processing in eHumanities - a paradigm shift. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 538-541).

    Abstract

    Introduction Speech research saw a dramatic change in paradigm in the 1990s. While the discussion had earlier been dominated by the phoneticians’ approach, grounded in knowledge of the phenomena in the speech signal, the situation changed completely after stochastic machinery such as Hidden Markov Models [1] and Artificial Neural Networks [2] had been introduced. Speech processing was now dominated by a purely mathematical approach that basically ignored all existing knowledge about the speech production process and the perception mechanisms. The key was now to construct a training set large enough to allow identifying the many free parameters of such stochastic engines. Provided that the training set is representative and the annotations of the training sets are largely ‘correct’, we could expect a satisfyingly functioning recognizer. While the success of knowledge-based systems such as Hearsay II [3] was limited, the statistically based approach led to great improvements in recognition rates and to industrial applications.
  • Wittenburg, P., Drude, S., & Broeder, D. (2012). Psycholinguistik. In H. Neuroth, S. Strathmann, A. Oßwald, R. Scheffel, J. Klump, & J. Ludwig (Eds.), Langzeitarchivierung von Forschungsdaten. Eine Bestandsaufnahme (pp. 83-108). Boizenburg: Verlag Werner Hülsbusch.

    Abstract

    5.1 Introduction to the research field. Psycholinguistics is the area of linguistics concerned with the relationship between human language and thought and other mental processes, i.e., it addresses a number of essential questions, such as: (1) How does our brain manage to understand essentially acoustic and visual communicative information and to convert it into mental representations? (2) How can our brain convert a complex state of affairs that we want to convey to others into a sequence of verbal and nonverbal actions that others can process? (3) How do we manage to learn languages in the different phases of life? (4) Are the cognitive processes of language processing universal, even though language systems differ so greatly that hardly any universals can be found in their structures?
  • Wnuk, E., De Valk, J. M., Huisman, J. L. A., & Majid, A. (2017). Hot and cold smells: Odor-temperature associations across cultures. Frontiers in Psychology, 8: 1373. doi:10.3389/fpsyg.2017.01373.

    Abstract

    It is often assumed that odors are associated with hot and cold temperatures, since odor processing may trigger thermal sensations, such as coolness in the case of mint. It is unknown, however, whether people make consistent temperature associations for a variety of everyday odors, and, if so, what determines them. Previous work investigating the bases of cross-modal associations suggests a number of possibilities, including universal forces (e.g., perception), as well as culture-specific forces (e.g., language and cultural beliefs). In this study, we examined odor-temperature associations in three cultures—Maniq (N = 11), Thai (N = 24), and Dutch (N = 24)—who differ with respect to their cultural preoccupation with odors, their odor lexicons, and their beliefs about the relationship of odors (and odor objects) to temperature. Participants matched 15 odors to temperature by touching cups filled with hot or cold water, and described the odors in their native language. The results showed no consistent associations among the Maniq, and only a handful of consistent associations between odor and temperature among the Thai and Dutch. The consistent associations differed across the two groups, arguing against their universality. Further analysis revealed cross-modal associations could not be explained by language, but could be the result of cultural beliefs.
  • Wnuk, E., & Majid, A. (2012). Olfaction in a hunter-gatherer society: Insights from language and culture. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1155-1160). Austin, TX: Cognitive Science Society.

    Abstract

    According to a widely-held view among various scholars, olfaction is inferior to other human senses. It is also believed by many that languages do not have words for describing smells. Data collected among the Maniq, a small population of nomadic foragers in southern Thailand, challenge the above claims and point to a great linguistic and cultural elaboration of odor. This article presents evidence of the importance of olfaction in indigenous rituals and beliefs, as well as in the lexicon. The results demonstrate the richness and complexity of the domain of smell in Maniq society and thereby challenge the universal paucity of olfactory terms and insignificance of olfaction for humans.
  • Wong, M. M. K., Hoekstra, S. D., Vowles, J., Watson, L. M., Fuller, G., Németh, A. H., Cowley, S. A., Ansorge, O., Talbot, K., & Becker, E. B. E. (2018). Neurodegeneration in SCA14 is associated with increased PKCγ kinase activity, mislocalization and aggregation. Acta Neuropathologica Communications, 6: 99. doi:10.1186/s40478-018-0600-7.

    Abstract

    Spinocerebellar ataxia type 14 (SCA14) is a subtype of the autosomal dominant cerebellar ataxias that is characterized by slowly progressive cerebellar dysfunction and neurodegeneration. SCA14 is caused by mutations in the PRKCG gene, encoding protein kinase C gamma (PKCγ). Despite the identification of 40 distinct disease-causing mutations in PRKCG, the pathological mechanisms underlying SCA14 remain poorly understood. Here we report the molecular neuropathology of SCA14 in post-mortem cerebellum and in human patient-derived induced pluripotent stem cells (iPSCs) carrying two distinct SCA14 mutations in the C1 domain of PKCγ, H36R and H101Q. We show that endogenous expression of these mutations results in the cytoplasmic mislocalization and aggregation of PKCγ in both patient iPSCs and cerebellum. PKCγ aggregates were not efficiently targeted for degradation. Moreover, mutant PKCγ was found to be hyper-activated, resulting in increased substrate phosphorylation. Together, our findings demonstrate that a combination of both loss-of-function and gain-of-function mechanisms is likely to underlie the pathogenesis of SCA14, caused by mutations in the C1 domain of PKCγ. Importantly, SCA14 patient iPSCs were found to accurately recapitulate pathological features observed in post-mortem SCA14 cerebellum, underscoring their potential as relevant disease models and their promise as future drug discovery tools.

  • Wong, M. M. K., Watson, L. M., & Becker, E. B. E. (2017). Recent advances in modelling of cerebellar ataxia using induced pluripotent stem cells. Journal of Neurology & Neuromedicine, 2(7), 11-15. doi:10.29245/2572.942X/2017/7.1134.

    Abstract

    The cerebellar ataxias are a group of incurable brain disorders that are caused primarily by the progressive dysfunction and degeneration of cerebellar Purkinje cells. The lack of reliable disease models for the heterogeneous ataxias has hindered the understanding of the underlying pathogenic mechanisms as well as the development of effective therapies for these devastating diseases. Recent advances in the field of induced pluripotent stem cell (iPSC) technology offer new possibilities to better understand and potentially reverse disease pathology. Given the neurodevelopmental phenotypes observed in several types of ataxias, iPSC-based models have the potential to provide significant insights into disease progression, as well as opportunities for the development of early intervention therapies. To date, however, very few studies have successfully used iPSC-derived cells to model cerebellar ataxias. In this review, we focus on recent breakthroughs in generating human iPSC-derived Purkinje cells. We also highlight the future challenges that will need to be addressed in order to fully exploit these models for the modelling of the molecular mechanisms underlying cerebellar ataxias and the development of effective therapeutics.
  • Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.

    Abstract

    We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggests that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and BA47-Parietal pathway, SndSym and inter-hemispheric BA45 pathway, GrInf and BA45-Temporal pathway and BA6-Temporal pathway, IQ and BA44-Parietal pathway, BA47-Parietal pathway, BA47-Temporal pathway and inter-hemispheric BA45 pathway, SWM and inter-hemispheric BA6 pathway and BA47-Parietal pathway, and VWM and BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature.
  • Xiang, H. (2012). The language networks of the brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    In recent decades, neuroimaging studies on the neural infrastructure of language are usually (or mostly) conducted with certain on-line language processing tasks. These functional neuroimaging studies helped to localize the language areas in the brain and to investigate the brain activity during explicit language processing. However, little is known about what is going on with the language areas when the brain is ‘at rest’, i.e., when there is no explicit language processing running. Taking advantage of the fcMRI and DTI techniques, this thesis is able to investigate the language function ‘off-line’ at the neuronal network level and the connectivity among language areas in the brain. Based on patient studies, the traditional, classical model of the perisylvian language network specifies a “Broca’s area – Arcuate Fasciculus – Wernicke’s area” loop (Ojemann 1991). With the help of modern neuroimaging techniques, researchers have been able to track language pathways that involve more brain structures than are in the classical model, and relate them to certain language functions. Against this background, a large part of this thesis made a contribution to the study of the topology of the language networks. It revealed that the language networks form a topographical functional connectivity pattern in the left hemisphere for right-handers. This thesis also revealed the importance of structural hubs, such as Broca’s and Wernicke’s areas, which have more connectivity to other brain areas and play a central role in the language networks. Furthermore, this thesis revealed both functionally and structurally lateralized language networks in the brain. The consistency between what is found in this thesis and what has been known from previous functional studies seems to suggest that the human brain is optimized and ‘ready’ for the language function even when there is currently no explicit language processing running.
  • Yager, J., & Burenhult, N. (2017). Jedek: a newly discovered Aslian variety of Malaysia. Linguistic Typology, 21(3), 493-545. doi:10.1515/lingty-2017-0012.

    Abstract

    Jedek is a previously unrecognized variety of the Northern Aslian subgroup of the Aslian branch of the Austroasiatic language family. It is spoken by c. 280 individuals in the resettlement area of Sungai Rual, near Jeli in Kelantan state, Peninsular Malaysia. The community originally consisted of several bands of foragers along the middle reaches of the Pergau river. Jedek’s distinct status first became known during a linguistic survey carried out in the DOBES project Tongues of the Semang (2005-2011). This paper describes the process leading up to its discovery and provides an overview of its typological characteristics.
  • Yang, J., Zhu, H., & Tian, X. (2018). Group-level multivariate analysis in EasyEEG toolbox: Examining the temporal dynamics using topographic responses. Frontiers in Neuroscience, 12: 468. doi:10.3389/fnins.2018.00468.

    Abstract

    Electroencephalography (EEG) provides high temporal resolution cognitive information from non-invasive recordings. However, one common practice, using a subset of sensors in ERP analysis, can hardly provide holistic and precise dynamic results. Selecting or grouping subsets of sensors may also be subject to selection bias and multiple comparisons, and is further complicated by individual differences in group-level analysis. More importantly, changes in neural generators and variations in response magnitude from the same neural sources are difficult to separate, which limits the capacity for testing different aspects of cognitive hypotheses. We introduce EasyEEG, a toolbox that includes several multivariate analysis methods to directly test cognitive hypotheses based on topographic responses that include data from all sensors. These multivariate methods can investigate effects in the dimensions of response magnitude and topographic patterns separately using data in the sensor space, and therefore enable assessing neural response dynamics. The concise workflow and the modular design provide user-friendly and programmer-friendly features. Users of all levels can benefit from the open-source, free EasyEEG to obtain a straightforward solution for efficient processing of EEG data and a complete pipeline from raw data to final results for publication.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2017). The phonological unit of Japanese Kanji compounds: A masked priming investigation. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1303-1328. doi:10.1037/xhp0000374.

    Abstract

    Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiment 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes.
  • You, W., Zhang, Q., & Verdonschot, R. G. (2012). Masked syllable priming effects in word and picture naming in Chinese. PLoS One, 7(10): e46595. doi:10.1371/journal.pone.0046595.

    Abstract

    Four experiments investigated the role of the syllable in Chinese spoken word production. Chen, Chen and Ferrand (2003) reported a syllable priming effect when primes and targets shared the first syllable using a masked priming paradigm in Chinese. Our Experiment 1 was a direct replication of Chen et al.'s (2003) Experiment 3 employing CV (e.g., /ba2.ying2/, strike camp) and CVG (e.g., /bai2.shou3/, white haired) syllable types. Experiment 2 tested the syllable priming effect using different syllable types: e.g., CV (/qi4.qiu2/, balloon) and CVN (/qing1.ting2/, dragonfly). Experiment 3 investigated this issue further using line drawings of common objects as targets that were preceded either by a CV (e.g., /qi3/, attempt), or a CVN (e.g., /qing2/, affection) prime. Experiment 4 further examined the priming effect by a comparison between CV or CVN priming and an unrelated priming condition using CV-NX (e.g., /mi2.ni3/, mini) and CVN-CX (e.g., /min2.ju1/, dwellings) as target words. These four experiments consistently found that CV targets were named faster when preceded by CV primes than when they were preceded by CVG, CVN or unrelated primes, whereas CVG or CVN targets showed the reverse pattern. These results indicate that the priming effect critically depends on the match between the structure of the prime and that of the first syllable of the target. The effect obtained in this study was consistent across different stimuli and different tasks (word and picture naming), and provides more conclusive and consistent data regarding the role of the syllable in Chinese speech production.
  • Zampieri, M., & Gebre, B. G. (2012). Automatic identification of language varieties: The case of Portuguese. In J. Jancsary (Ed.), Proceedings of the Conference on Natural Language Processing 2012, September 19-21, 2012, Vienna (pp. 233-237). Vienna: Österreichische Gesellschaft für Artificial Intelligence (ÖGAI).

    Abstract

    Automatic Language Identification of written texts is a well-established area of research in Computational Linguistics. State-of-the-art algorithms often rely on n-gram character models to identify the correct language of texts, with good results seen for European languages. In this paper we propose the use of a character n-gram model and a word n-gram language model for the automatic classification of two written varieties of Portuguese: European and Brazilian. Results reached an accuracy of 0.998 using character 4-grams.
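    The character n-gram approach summarized in this abstract can be illustrated with a minimal sketch: build a frequency profile of character 4-grams per variety, then score an unseen text against each profile. This is not the authors' implementation, and the toy training sentences below are invented purely for demonstration.

```python
from collections import Counter

def char_ngrams(text, n=4):
    """Return the overlapping character n-grams of a text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train_profile(texts, n=4):
    """Build a character n-gram frequency profile for one language variety."""
    profile = Counter()
    for text in texts:
        profile.update(char_ngrams(text.lower(), n))
    return profile

def classify(text, profiles, n=4):
    """Score a text against each variety profile; highest cumulative frequency wins."""
    grams = char_ngrams(text.lower(), n)
    scores = {
        label: sum(profile[g] for g in grams)
        for label, profile in profiles.items()
    }
    return max(scores, key=scores.get)

# Toy training data (invented sentences, for illustration only).
profiles = {
    "pt-BR": train_profile(["o time de futebol ganhou o campeonato",
                            "ela pegou o onibus para o trabalho"]),
    "pt-PT": train_profile(["a equipa de futebol venceu o campeonato",
                            "ela apanhou o autocarro para o emprego"]),
}

print(classify("a equipa venceu", profiles))  # → pt-PT
```

    A real system would train on large corpora and typically smooth or normalize the counts, but the core idea is the same: lexical and orthographic differences between varieties surface reliably in character 4-gram statistics.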
  • Zampieri, M., Gebre, B. G., & Diwersy, S. (2012). Classifying pluricentric languages: Extending the monolingual model. In Proceedings of SLTC 2012. The Fourth Swedish Language Technology Conference. Lund, October 24-26, 2012 (pp. 79-80). Lund University.

    Abstract

    This study presents a new language identification model for pluricentric languages that uses n-gram language models at the character and word level. The model is evaluated in two steps. The first step consists of the identification of two varieties of Spanish (Argentina and Spain) and two varieties of French (Quebec and France) evaluated independently in binary classification schemes. The second step integrates these language models in a six-class classification with two Portuguese varieties.
  • Zeshan, U. (2003). Aspects of Türk İşaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TID), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TID grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TID and the language community using this sign language, five domains of TID grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.
  • Zeshan, U., & De Vos, C. (Eds.). (2012). Sign languages in village communities: Anthropological and linguistic insights. Berlin: Mouton de Gruyter.

    Abstract

    The book is a unique collection of research on sign languages that have emerged in rural communities with a high incidence of, often hereditary, deafness. These sign languages represent the latest addition to the comparative investigation of languages in the gestural modality, and the book is the first compilation of a substantial number of different "village sign languages". Written by leading experts in the field, the volume uniquely combines anthropological and linguistic insights, looking at both the social dynamics and the linguistic structures in these village communities. The book includes primary data from eleven different signing communities across the world, including results from Jamaica, India, Turkey, Thailand, and Bali. All known village sign languages are endangered, usually because of pressure from larger urban sign languages, and some have died out already. Ironically, it is often the success of the larger sign language communities in urban centres, their recognition and subsequent spread, which leads to the endangerment of these small minority sign languages. The book addresses this specific type of language endangerment, documentation strategies, and other ethical issues pertaining to these sign languages on the basis of first-hand experiences by Deaf fieldworkers.
  • Zhang, Y., & Yu, C. (2017). How misleading cues influence referential uncertainty in statistical cross-situational learning. In M. LaMendola, & J. Scott (Eds.), Proceedings of the 41st Annual Boston University Conference on Language Development (BUCLD 41) (pp. 820-833). Boston, MA: Cascadilla Press.
  • Zhen, Z., Kong, X., Huang, L., Yang, Z., Wang, X., Hao, X., Huang, T., Song, Y., & Liu, J. (2017). Quantifying the variability of scene-selective regions: Interindividual, interhemispheric, and sex differences. Human Brain Mapping, 38(4), 2260-2275. doi:10.1002/hbm.23519.

    Abstract

    Scene-selective regions (SSRs), including the parahippocampal place area (PPA), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS), are among the most widely characterized functional regions in the human brain. However, previous studies have mostly focused on the commonality within each SSR, providing little information on different aspects of their variability. In a large group of healthy adults (N = 202), we used functional magnetic resonance imaging to investigate different aspects of topographical and functional variability within SSRs, including interindividual, interhemispheric, and sex differences. First, the PPA, RSC, and TOS were delineated manually for each individual. We then demonstrated that SSRs showed substantial interindividual variability in both spatial topography and functional selectivity. We further identified consistent interhemispheric differences in the spatial topography of all three SSRs, but distinct interhemispheric differences in scene selectivity. Moreover, we found that all three SSRs showed stronger scene selectivity in men than in women. In summary, our work thoroughly characterized the interindividual, interhemispheric, and sex variability of the SSRs and invites future work on the origin and functional significance of these variabilities. Additionally, we constructed the first probabilistic atlases for the SSRs, which provide the detailed anatomical reference for further investigations of the scene network.
  • Zheng, X., Roelofs, A., Farquhar, J., & Lemhöfer, K. (2018). Monitoring of language selection errors in switching: Not all about conflict. PLoS One, 13(11): e0200397. doi:10.1371/journal.pone.0200397.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. To investigate how bilinguals monitor their speech errors and control their languages in use, we recorded event-related potentials (ERPs) in unbalanced Dutch-English bilingual speakers in a cued language-switching task. We tested the conflict-based monitoring model of Nozari and colleagues by investigating the error-related negativity (ERN) and comparing the effects of the two switching directions (i.e., to the first language, L1 vs. to the second language, L2). Results show that the speakers made more language selection errors when switching from their L2 to the L1 than vice versa. In the EEG, we observed a robust ERN effect following language selection errors compared to correct responses, reflecting monitoring of speech errors. Most interestingly, the ERN effect was enlarged when the speakers were switching to their L2 (less conflict) compared to switching to the L1 (more conflict). Our findings do not support the conflict-based monitoring model. We discuss an alternative account in terms of error prediction and reinforcement learning.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2018). Language selection errors in switching: language priming or cognitive control? Language, Cognition and Neuroscience, 33(2), 139-147. doi:10.1080/23273798.2017.1363401.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. We examined the relative contribution of top-down cognitive control and bottom-up language priming to these errors. Unbalanced Dutch-English bilinguals named pictures and were cued to switch between languages under time pressure. We also manipulated the number of same-language trials before a switch (long vs. short runs). Results show that speakers made more language selection errors when switching from their second language (L2) to the first language (L1) than vice versa. Furthermore, they made more errors when switching to the L1 after a short compared to a long run of L2 trials. In the reverse switching direction (L1 to L2), run length had no effect. These findings are most compatible with an account of language selection errors that assigns a strong role to top-down processes of cognitive control.

    Additional information

    plcp_a_1363401_sm2537.docx
  • Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.

    Abstract

    Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributed to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all of the three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects semantic unification, distinct from other brain activations that may reflect task-specific strategic processing.

    Additional information

    Zhu_2012_suppl.dot
  • Zoefel, B., Ten Oever, S., & Sack, A. T. (2018). The involvement of endogenous neural oscillations in the processing of rhythmic input: More than a regular repetition of evoked neural responses. Frontiers in Neuroscience, 12: 95. doi:10.3389/fnins.2018.00095.

    Abstract

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
  • De Zubicaray, G., & Fisher, S. E. (Eds.). (2017). Genes, brain and language [Special Issue]. Brain and Language, 172.
  • De Zubicaray, G., & Fisher, S. E. (2017). Genes, Brain, and Language: A brief introduction to the Special Issue. Brain and Language, 172, 1-2. doi:10.1016/j.bandl.2017.08.003.
  • Zwaan, R. A., Van der Stoep, N., Guadalupe, T., & Bouwmeester, S. (2012). Language comprehension in the balance: The robustness of the action-compatibility effect (ACE). PLoS One, 7(2): e31204. doi:10.1371/journal.pone.0031204.

    Abstract

    How does language comprehension interact with motor activity? We investigated the conditions under which comprehending an action sentence affects people's balance. We performed two experiments to assess whether sentences describing forward or backward movement modulate the lateral movements made by subjects who made sensibility judgments about the sentences. In one experiment subjects were standing on a balance board and in the other they were seated on a balance board that was mounted on a chair. This allowed us to investigate whether the action compatibility effect (ACE) is robust and persists in the face of salient incompatibilities between sentence content and subject movement. Growth-curve analysis of the movement trajectories produced by the subjects in response to the sentences suggests that the ACE is indeed robust. Sentence content influenced movement trajectory despite salient inconsistencies between implied and actual movement. These results are interpreted in the context of the current discussion of embodied, or grounded, language comprehension and meaning representation.
  • Zwitserlood, I. (2012). Classifiers. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign Language: an International Handbook (pp. 158-186). Berlin: Mouton de Gruyter.

    Abstract

    Classifiers (currently also called 'depicting handshapes'), are observed in almost all signed languages studied to date and form a well-researched topic in sign language linguistics. Yet, these elements are still subject to much debate with respect to a variety of matters. Several different categories of classifiers have been posited on the basis of their semantics and the linguistic context in which they occur. The function(s) of classifiers are not fully clear yet. Similarly, there are differing opinions regarding their structure and the structure of the signs in which they appear. Partly as a result of comparison to classifiers in spoken languages, the term 'classifier' itself is under debate. In contrast to these disagreements, most studies on the acquisition of classifier constructions seem to consent that these are difficult to master for Deaf children. This article presents and discusses all these issues from the viewpoint that classifiers are linguistic elements.
  • Zwitserlood, I. (2003). Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). PhD Thesis, LOT, Utrecht. Retrieved from http://igitur-archive.library.uu.nl/dissertations/2003-0717-122837/UUindex.html.

    Abstract

    This study investigates the morphological and morphosyntactic characteristics of hand configurations in signs, particularly in Nederlandse Gebarentaal (NGT). The literature on sign languages in general acknowledges that hand configurations can function as morphemes, more specifically as classifiers, in a subset of signs: verbs expressing the motion, location, and existence of referents (VELMs). These verbs are considered the output of productive sign formation processes. In contrast, other signs in which similar hand configurations appear (iconic or motivated signs) have been considered to be lexicalized signs, not involving productive processes. This research report shows that meaningful hand configurations have (at least) two very different functions in the grammar of NGT (and presumably in other sign languages, too). First, they are agreement markers on VELMs, and hence are functional elements. Second, they are roots in motivated signs, and thus lexical elements. The latter signs are analysed as root compounds and are formed from various roots by productive processes. The similarities in surface form and differences in morphosyntactic characteristics observed in comparison of VELMs and root compounds are attributed to their different structures and to the sign language interface between grammar and phonetic form.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). An empirical investigation of expression of multiple entities in Turkish Sign Language (TİD): Considering the effects of modality. Lingua, 122, 1636-1667. doi:10.1016/j.lingua.2012.08.010.

    Abstract

    This paper explores the expression of multiple entities in Turkish Sign Language (Türk İşaret Dili; TİD), a less well-studied sign language. It aims to provide a comprehensive description of the ways and frequencies in which entity plurality in this language is expressed, both within and outside the noun phrase. We used a corpus that includes both elicited and spontaneous data from native signers. The results reveal that most of the expressions of multiple entities in TİD are iconic, spatial strategies (i.e. localization and spatial plural predicate inflection) none of which, we argue, should be considered as genuine plural marking devices with the main aim of expressing plurality. Instead, the observed devices for localization and predicate inflection allow for a plural interpretation when multiple locations in space are used. Our data do not provide evidence that TİD employs (productive) morphological plural marking (i.e. reduplication) on nouns, in contrast to some other sign languages and many spoken languages. We relate our findings to expression of multiple entities in other signed languages and in spoken languages and discuss these findings in terms of modality effects on expression of multiple entities in human language.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.
