Publications

  • Wnuk, E., Laophairoj, R., & Majid, A. (2020). Smell terms are not rara: A semantic investigation of odor vocabulary in Thai. Linguistics, 58(4), 937-966. doi:10.1515/ling-2020-0009.
  • Wnuk, E., & Burenhult, N. (2014). Contact and isolation in hunter-gatherer language dynamics: Evidence from Maniq phonology (Aslian, Malay Peninsula). Studies in Language, 38(4), 956-981. doi:10.1075/sl.38.4.06wnu.
  • Wnuk, E., & Majid, A. (2014). Revisiting the limits of language: The odor lexicon of Maniq. Cognition, 131, 125-138. doi:10.1016/j.cognition.2013.12.008.

    Abstract

    It is widely believed that human languages cannot encode odors. While this is true for English,
    and other related languages, data from some non-Western languages challenge this
    view. Maniq, a language spoken by a small population of nomadic hunter–gatherers in
    southern Thailand, is such a language. It has a lexicon of over a dozen terms dedicated
    to smell. We examined the semantics of these smell terms in 3 experiments (exemplar
    listing, similarity judgment and off-line rating). The exemplar listing task confirmed that
    Maniq smell terms have complex meanings encoding smell qualities. Analyses of the
    similarity data revealed that the odor lexicon is coherently structured by two dimensions.
    The underlying dimensions are pleasantness and dangerousness, as verified by the off-line
    rating study. Ethnographic data illustrate that smell terms have detailed semantics tapping
    into broader cultural constructs. Contrary to the widespread view that languages cannot
    encode odors, the Maniq data show odor can be a coherent semantic domain, thus shedding
    new light on the limits of language.
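The two-dimensional structure the authors recover from similarity judgments is the kind of result typically obtained with multidimensional scaling. As a hedged illustration only (the paper's actual analysis pipeline may differ), a minimal classical MDS over a similarity matrix looks like this:

```python
import numpy as np

def classical_mds(similarity, n_dims=2):
    """Embed items into n_dims coordinates via classical MDS.

    `similarity` is a symmetric matrix with values in [0, 1];
    it is converted to distances before double-centering.
    """
    dist = 1.0 - similarity
    d2 = dist ** 2
    n = d2.shape[0]
    # Double-centering matrix J = I - 11'/n
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j
    # Keep the top n_dims eigencomponents as coordinates
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

Items judged similar end up near each other in the recovered space; interpreting the axes (here, pleasantness and dangerousness) then requires an external rating study, as in the paper.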
  • Wong, M. M. K., Hoekstra, S. D., Vowles, J., Watson, L. M., Fuller, G., Németh, A. H., Cowley, S. A., Ansorge, O., Talbot, K., & Becker, E. B. E. (2018). Neurodegeneration in SCA14 is associated with increased PKCγ kinase activity, mislocalization and aggregation. Acta Neuropathologica Communications, 6: 99. doi:10.1186/s40478-018-0600-7.

    Abstract

    Spinocerebellar ataxia type 14 (SCA14) is a subtype of the autosomal dominant cerebellar ataxias that is characterized by slowly progressive cerebellar dysfunction and neurodegeneration. SCA14 is caused by mutations in the PRKCG gene, encoding protein kinase C gamma (PKCγ). Despite the identification of 40 distinct disease-causing mutations in PRKCG, the pathological mechanisms underlying SCA14 remain poorly understood. Here we report the molecular neuropathology of SCA14 in post-mortem cerebellum and in human patient-derived induced pluripotent stem cells (iPSCs) carrying two distinct SCA14 mutations in the C1 domain of PKCγ, H36R and H101Q. We show that endogenous expression of these mutations results in the cytoplasmic mislocalization and aggregation of PKCγ in both patient iPSCs and cerebellum. PKCγ aggregates were not efficiently targeted for degradation. Moreover, mutant PKCγ was found to be hyper-activated, resulting in increased substrate phosphorylation. Together, our findings demonstrate that a combination of loss-of-function and gain-of-function mechanisms is likely to underlie the pathogenesis of SCA14 caused by mutations in the C1 domain of PKCγ. Importantly, SCA14 patient iPSCs were found to accurately recapitulate pathological features observed in post-mortem SCA14 cerebellum, underscoring their potential as relevant disease models and their promise as future drug discovery tools.

  • Xiong, K., Verdonschot, R. G., & Tamaoka, K. (2020). The time course of brain activity in reading identical cognates: An ERP study of Chinese - Japanese bilinguals. Journal of Neurolinguistics, 55: 100911. doi:10.1016/j.jneuroling.2020.100911.

    Abstract

    Previous studies suggest that bilinguals' lexical access is language non-selective, especially for orthographically identical translation equivalents across languages (i.e., identical cognates). The present study investigated how such words (e.g., a word meaning "school" in both Chinese and Japanese) are processed in the (late) Chinese - Japanese bilingual brain. Using an L2-Japanese lexical decision task, both behavioral and electrophysiological data were collected. Reaction times (RTs), as well as the N400 component, showed that cognates are more easily recognized than non-cognates. Additionally, an early component (i.e., the N250), potentially reflecting activation at the word-form level, was also found. Cognates elicited a more positive N250 than non-cognates in the frontal region, indicating that the cognate facilitation effect occurred at an early stage of word formation for languages with logographic scripts.
  • Yang, W., Chan, A., Chang, F., & Kidd, E. (2020). Four-year-old Mandarin-speaking children’s online comprehension of relative clauses. Cognition, 196: 104103. doi:10.1016/j.cognition.2019.104103.

    Abstract

    A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and on-line processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting syntactic processing preferences are shaped by the input experience of these constructions.

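The permutation logic behind analyses like the one reported can be sketched as a label-shuffling test. This is a generic two-sample illustration, not the (likely cluster-based, time-resolved) procedure used on the eye-movement data:

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns a two-sided p-value: the proportion of label shufflings
    whose absolute mean difference is at least as large as observed.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break the condition labels
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    return count / n_perm
```

Because the null distribution is built from the data itself, the test makes no normality assumption, which is why permutation approaches are popular for looking-time and EEG measures.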
  • Yang, J., Cai, Q., & Tian, X. (2020). How do we segment text? Two-stage chunking operation in reading. eNeuro, 7(3): ENEURO.0425-19.2020. doi:10.1523/ENEURO.0425-19.2020.

    Abstract

    Chunking in language comprehension is a process that segments continuous linguistic input into smaller chunks that are in the reader’s mental lexicon. Effective chunking during reading facilitates disambiguation and enhances efficiency for comprehension. However, the chunking mechanisms remain elusive, especially in reading, given that information arrives simultaneously yet written systems such as Chinese may lack explicit cues labeling boundaries. What are the mechanisms of chunking that mediate the reading of text containing hierarchical information? We investigated this question by manipulating the lexical status of the chunks at distinct levels in four-character Chinese strings, including the two-character local chunk and the four-character global chunk. Male and female human participants were asked to make lexical decisions on these strings in a behavioral experiment, followed by a passive reading task during which their electroencephalography (EEG) was recorded. The behavioral results showed that the lexical decision time for lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, which indicated that the processing of global chunks took priority over the local chunks. The EEG results revealed that familiar lexical chunks were detected simultaneously at both levels and further processed in a different temporal order: the onset of lexical access for the global chunks was earlier than that of the local chunks. These consistent results suggest a two-stage operation for chunking in reading: the simultaneous detection of familiar lexical chunks at multiple levels around 100 ms, followed by recognition of chunks with global precedence.
  • Yang, J., Zhu, H., & Tian, X. (2018). Group-level multivariate analysis in EasyEEG toolbox: Examining the temporal dynamics using topographic responses. Frontiers in Neuroscience, 12: 468. doi:10.3389/fnins.2018.00468.

    Abstract

    Electroencephalography (EEG) provides high temporal resolution cognitive information from non-invasive recordings. However, one common practice, using only a subset of sensors in ERP analysis, can hardly provide holistic and precise dynamic results. Selecting or grouping subsets of sensors may also be subject to selection bias and multiple comparisons, and is further complicated by individual differences in group-level analysis. More importantly, changes in neural generators and variations in response magnitude from the same neural sources are difficult to separate, which limits the capacity to test different aspects of cognitive hypotheses. We introduce EasyEEG, a toolbox that includes several multivariate analysis methods to directly test cognitive hypotheses based on topographic responses that include data from all sensors. These multivariate methods can investigate effects in the dimensions of response magnitude and topographic patterns separately using data in the sensor space, and therefore enable assessment of neural response dynamics. The concise workflow and the modular design provide user-friendly and programmer-friendly features. Users of all levels can benefit from the open-source, free EasyEEG to obtain a straightforward solution for efficient processing of EEG data and a complete pipeline from raw data to final results for publication.
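The separation of response magnitude from topographic pattern that the abstract describes can be illustrated with two standard scalp-level measures, global field power (GFP) and topographic dissimilarity (DISS). This sketch is independent of the EasyEEG API and uses only a single time point's sensor values:

```python
import math

def gfp(topo):
    """Global field power: spatial std across sensors (response magnitude)."""
    mean = sum(topo) / len(topo)
    return math.sqrt(sum((v - mean) ** 2 for v in topo) / len(topo))

def diss(topo_a, topo_b):
    """Topographic dissimilarity between two GFP-normalized maps.

    Insensitive to overall magnitude, so it isolates differences in
    the spatial pattern (i.e., putative changes in neural generators).
    """
    ma, mb = sum(topo_a) / len(topo_a), sum(topo_b) / len(topo_b)
    na = [(v - ma) / gfp(topo_a) for v in topo_a]
    nb = [(v - mb) / gfp(topo_b) for v in topo_b]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(na, nb)) / len(na))
```

Scaling a map changes its GFP but leaves DISS at zero, which is exactly the magnitude/pattern dissociation the toolbox's multivariate methods exploit.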
  • Yang, Y., Dai, B., Howell, P., Wang, X., Li, K., & Lu, C. (2014). White and grey matter changes in the language network during healthy aging. PLoS One, 9(9): e108077. doi:10.1371/journal.pone.0108077.

    Abstract

    Neural structures change with age but there is no consensus on the exact processes involved. This study tested the hypothesis that white and grey matter in the language network changes during aging according to a “last in, first out” process. The fractional anisotropy (FA) of white matter and cortical thickness of grey matter were measured in 36 participants whose ages ranged from 55 to 79 years. Within the language network, the dorsal pathway connecting the mid-to-posterior superior temporal cortex (STC) and the inferior frontal cortex (IFC) was affected more by aging in both FA and thickness than the other dorsal pathway connecting the STC with the premotor cortex and the ventral pathway connecting the mid-to-anterior STC with the ventral IFC. These results were independently validated in a second group of 20 participants whose ages ranged from 50 to 73 years. The pathway that is most affected during aging matures later than the other two pathways (which are present at birth). The results are interpreted as showing that the neural structures which mature later are affected more than those that mature earlier, supporting the “last in, first out” theory.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2020). The influence of orthography on speech production: Evidence from masked priming in word-naming and picture-naming tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(8), 1570-1589. doi:10.1037/xlm0000829.

    Abstract

    In a masked priming word-naming task, a facilitation due to the initial-segmental sound overlap for 2-character kanji prime-target pairs was affected by certain orthographic properties (Yoshihara, Nakayama, Verdonschot, & Hino, 2017). That is, the facilitation that was due to the initial mora overlap occurred only when the mora was the whole pronunciation of their initial kanji characters (i.e., match pairs; e.g., /ka-se.ki/-/ka-rjo.ku/). When the shared initial mora was only a part of the kanji characters' readings, however, there was no facilitation (i.e., mismatch pairs; e.g., /ha.tu-a.N/-/ha.ku-bu.tu/). In the present study, we used a masked priming picture-naming task to investigate whether the previous results were relevant only when the orthography of targets is visually presented. In Experiment 1, the main findings of our word-naming task were fully replicated in a picture-naming task. In Experiments 2 and 3, the absence of facilitation for the mismatch pairs was confirmed with a new set of stimuli. On the other hand, a significant facilitation was observed for the match pairs that shared the 2 initial morae (in Experiment 4), which was again consistent with the results of our word-naming study. These results suggest that the orthographic properties constrain the phonological expression of masked priming for kanji words across 2 tasks that are likely to differ in how phonology is retrieved. Specifically, we propose that orthography of a word is activated online and constrains the phonological encoding processes in these tasks.
  • Zavala, R. (2001). Entre consejos, diablos y vendedores de caca: Rasgos gramaticales del oluteco en tres de sus cuentos. Tlalocan. Revista de Fuentes para el Conocimiento de las Culturas Indígenas de México, XIII, 335-414.

    Abstract

    The three Olutec stories from Oluta, Veracruz, were narrated by Antonio Asistente Maldonado. Roberto Zavala presents a morpheme-by-morpheme analysis of the texts with a sketch of the major grammatical and typological features of this language. Olutec is spoken by three dozen speakers. The grammatical structure of this language has not been described before. The sketch contains information on verb and noun morphology, verb classes, clause types, inverse/direct patterns, grammaticalization processes, applicatives, incorporation, word order type, and discontinuous expressions. The stories presented here are the first Olutec texts ever published. The motifs of the stories are well known throughout Middle America. The story of "the Rabbit who wants to be big" explains why one of the main protagonists of Middle American folktales acquired long ears. The story of "the Devil who is inebriated by the people of a village" explains how the inhabitants of a village discover the true identity of a man who likes to dance huapango and decide to get rid of him. Finally, the story of "the shit-sellers" presents two compadres, one who is lazy and the other who works hard. The hard worker asks the lazy compadre how he survives without working. The latter lies to him that he sells shit in the neighboring village. The hard-working compadre decides to become a shit-seller and in the process realizes that the lazy compadre deceived him. However, he is lucky and meets the Devil, who offers him money in compensation for having been deceived. When the lazy compadre realizes that the hard-working compadre has become rich, he tries to do the same business but gets beaten in the process.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2020). Language selection contributes to intrusion errors in speaking: Evidence from picture naming. Bilingualism: Language and Cognition, 23, 788-800. doi:10.1017/S1366728919000683.

    Abstract

    Bilinguals usually select the right language to speak for the particular context they are in, but sometimes the nontarget language intrudes. Despite a large body of research into language selection and language control, it remains unclear where intrusion errors originate from. These errors may be due to incorrect selection of the nontarget language at the conceptual level, or be a consequence of erroneous word selection (despite correct language selection) at the lexical level. We examined the former possibility in two language switching experiments using a manipulation that supposedly affects language selection on the conceptual level, namely whether the conversational language context was associated with the target language (congruent) or with the alternative language (incongruent) on a trial. Both experiments showed that language intrusion errors occurred more often in incongruent than in congruent contexts, providing converging evidence that language selection during concept preparation is one driving force behind language intrusion.
  • Zheng, X., Roelofs, A., Erkan, H., & Lemhöfer, K. (2020). Dynamics of inhibitory control during bilingual speech production: An electrophysiological study. Neuropsychologia, 140: 107387. doi:10.1016/j.neuropsychologia.2020.107387.

    Abstract

    Bilingual speakers have to control their languages to avoid interference, which may be achieved by enhancing the target language and/or inhibiting the nontarget language. Previous research suggests that bilinguals use inhibition (e.g., Jackson et al., 2001), which should be reflected in the N2 component of the event-related potential (ERP) in the EEG. In the current study, we investigated the dynamics of inhibitory control by measuring the N2 during language switching and repetition in bilingual picture naming. Participants had to name pictures in Dutch or English depending on the cue. A run of same-language trials could be short (two or three trials) or long (five or six trials). We assessed whether RTs and N2 changed over the course of same-language runs, and at a switch between languages. Results showed that speakers named pictures more quickly late as compared to early in a run of same-language trials. Moreover, they made a language switch more quickly after a long run than after a short run. This run-length effect was only present in the first language (L1), not in the second language (L2). In ERPs, we observed a widely distributed switch effect in the N2, which was larger after a short run than after a long run. This effect was only present in the L2, not in the L1, although the difference was not significant between languages. In contrast, the N2 was not modulated during a same-language run. Our results suggest that the nontarget language is inhibited at a switch, but not during the repeated use of the target language.

  • Zheng, X., Roelofs, A., Farquhar, J., & Lemhöfer, K. (2018). Monitoring of language selection errors in switching: Not all about conflict. PLoS One, 13(11): e0200397. doi:10.1371/journal.pone.0200397.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. To investigate how bilinguals monitor their speech errors and control their languages in use, we recorded event-related potentials (ERPs) in unbalanced Dutch-English bilingual speakers in a cued language-switching task. We tested the conflict-based monitoring model of Nozari and colleagues by investigating the error-related negativity (ERN) and comparing the effects of the two switching directions (i.e., to the first language, L1 vs. to the second language, L2). Results show that the speakers made more language selection errors when switching from their L2 to the L1 than vice versa. In the EEG, we observed a robust ERN effect following language selection errors compared to correct responses, reflecting monitoring of speech errors. Most interestingly, the ERN effect was enlarged when the speakers were switching to their L2 (less conflict) compared to switching to the L1 (more conflict). Our findings do not support the conflict-based monitoring model. We discuss an alternative account in terms of error prediction and reinforcement learning.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2018). Language selection errors in switching: language priming or cognitive control? Language, Cognition and Neuroscience, 33(2), 139-147. doi:10.1080/23273798.2017.1363401.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. We examined the relative contribution of top-down cognitive control and bottom-up language priming to these errors. Unbalanced Dutch-English bilinguals named pictures and were cued to switch between languages under time pressure. We also manipulated the number of same-language trials before a switch (long vs. short runs). Results show that speakers made more language selection errors when switching from their second language (L2) to the first language (L1) than vice versa. Furthermore, they made more errors when switching to the L1 after a short compared to a long run of L2 trials. In the reverse switching direction (L1 to L2), run length had no effect. These findings are most compatible with an account of language selection errors that assigns a strong role to top-down processes of cognitive control.

  • Zoefel, B., Ten Oever, S., & Sack, A. T. (2018). The involvement of endogenous neural oscillations in the processing of rhythmic input: More than a regular repetition of evoked neural responses. Frontiers in Neuroscience, 12: 95. doi:10.3389/fnins.2018.00095.

    Abstract

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
  • Zora, H., Rudner, M., & Montell Magnusson, A. (2020). Concurrent affective and linguistic prosody with the same emotional valence elicits a late positive ERP response. European Journal of Neuroscience, 51(11), 2236-2249. doi:10.1111/ejn.14658.

    Abstract

    Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing by electroencephalography (EEG). A spoken pair of Swedish words—[ˈfɑ́ːsɛn] phase and [ˈfɑ̀ːsɛn] damn—that differed in emotional semantics due to linguistic prosody was presented to 16 subjects in an angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords—[ˈvɑ́ːsɛm] and [ˈvɑ̀ːsɛm]—were used as controls. Following the constructionist concept of emotions, accentuating the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence—angry [ˈfɑ̀ːsɛn] damn—would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300–350 ms, and affective prosody evoked a P3a at 350–400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820–870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain does not only distinguish between these two functions of prosody but also integrates them based on language and experience.
  • De Zubicaray, G. I., Hartsuiker, R. J., & Acheson, D. J. (2014). Mind what you say—general and specific mechanisms for monitoring in speech production. Frontiers in Human Neuroscience, 8: 514. doi:10.3389/fnhum.2014.00514.

    Abstract

    For most people, speech production is relatively effortless and error-free. Yet it has long been recognized that we need some type of control over what we are currently saying and what we plan to say. Precisely how we monitor our internal and external speech has been a topic of research interest for several decades. The predominant approach in psycholinguistics has assumed monitoring of both is accomplished via systems responsible for comprehending others' speech.

    This special topic aimed to broaden the field, firstly by examining proposals that speech production might also engage more general systems, such as those involved in action monitoring. A second aim was to examine proposals for a production-specific, internal monitor. Both aims require that we also specify the nature of the representations subject to monitoring.
  • Zuidema, W., French, R. M., Alhama, R. G., Ellis, K., O'Donnell, T. J., Sainburg, T., & Gentner, T. Q. (2020). Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning. Topics in Cognitive Science, 12(3), 925-941. doi:10.1111/tops.12474.

    Abstract

    There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.
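Point (b), stimulus generation, is easy to make concrete. A toy finite-state grammar in the spirit of classic artificial-grammar-learning stimuli (the states and transitions below are illustrative, not taken from the paper) can both generate grammatical strings and check candidate strings:

```python
import random

# Hypothetical finite-state grammar: each state maps to (symbol, next-state)
# transitions; a walk that reaches "END" yields a grammatical string.
GRAMMAR = {
    "S0": [("T", "S1"), ("P", "S2")],
    "S1": [("S", "S1"), ("X", "S2")],
    "S2": [("V", "END"), ("P", "S1")],
}

def generate_string(rng, max_len=8):
    """Random walk through the grammar; retries if a walk runs too long."""
    while True:
        state, out = "S0", []
        while state != "END":
            symbol, state = rng.choice(GRAMMAR[state])
            out.append(symbol)
            if len(out) > max_len:
                break
        if state == "END":
            return "".join(out)

def is_grammatical(s):
    """Accepts iff some path through the grammar emits s and ends at END."""
    frontier = {"S0"}
    for ch in s:
        frontier = {nxt for st in frontier
                    for (sym, nxt) in GRAMMAR.get(st, []) if sym == ch}
        if not frontier:
            return False
    return "END" in frontier
```

Writing the grammar down as data rather than prose makes the stimulus set reproducible and lets ungrammatical foils be constructed and verified automatically, which is exactly the kind of formalization the paper advocates.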
  • Zumer, J. M., Scheeringa, R., Schoffelen, J.-M., Norris, D. G., & Jensen, O. (2014). Occipital alpha activity during stimulus processing gates the information flow to object-selective cortex. PLoS Biology, 12(10): e1001965. doi:10.1371/journal.pbio.1001965.

    Abstract

    Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. 
    Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity.
