Publications

  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that had been collected during the development of a vocabulary test, in order to assess in which modality test words should be presented. Participants had carried out a word recognition task, where non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary test and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
  • Wongratwanich, P., Shimabukuro, K., Konishi, M., Nagasaki, T., Ohtsuka, M., Suei, Y., Nakamoto, T., Verdonschot, R. G., Kanesaki, T., Sutthiprapaporn, P., & Kakimoto, N. (2021). Do various imaging modalities provide potential early detection and diagnosis of medication-related osteonecrosis of the jaw? A review. Dentomaxillofacial Radiology, 50: 20200417. doi:10.1259/dmfr.20200417.

    Abstract


    Objective: Patients with medication-related osteonecrosis of the jaw (MRONJ) often visit their dentists at advanced stages and subsequently require treatments that greatly affect quality of life. Currently, no clear diagnostic criteria exist to assess MRONJ, and the definitive diagnosis solely relies on clinical bone exposure. This ambiguity leads to a diagnostic delay, complications, and unnecessary burden. This article aims to identify imaging modalities' usage and findings of MRONJ to provide possible approaches for early detection.

    Methods: Literature searches were conducted using PubMed, Web of Science, Scopus, and Cochrane Library to review all diagnostic imaging modalities for MRONJ.

    Results: Panoramic radiography offers a fundamental understanding of the lesions. Imaging findings were comparable between non-exposed and exposed MRONJ, showing osteolysis, osteosclerosis, and thickened lamina dura. Mandibular cortex index Class II could be a potential early MRONJ indicator. Three-dimensional modalities, CT and CBCT, were able to show more features unique to MRONJ, such as a solid-type periosteal reaction, buccal predominance of cortical perforation, and a bone-within-bone appearance. On MRI, vital bone is hypointense on T1WI and hyperintense on T2WI and STIR, whereas necrotic bone shows hypointensity on all of T1WI, T2WI, and STIR. Functional imaging is the most sensitive method but is usually performed for metastasis detection rather than as a diagnostic tool for early MRONJ.

    Conclusion: Currently, MRONJ-specific imaging features cannot be firmly established. However, the current data are valuable as they may lead to a more efficient diagnostic procedure along with a more suitable selection of imaging modalities.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), used to register linguistic terms used in various fields. This chapter recounts the history of DC documentation in TC 37, beginning from paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides that have been made to collect a very large comprehensive collection of DCs, it also outlines difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections and describes efforts to overcome some of the present shortcomings and to establish positive working procedures designed to engage a wide range of people involved in the creation of language resources.
  • Wright, S. E., & Windhouwer, M. (2013). ISOcat - im Reich der Datenkategorien. eDITion: Fachzeitschrift für Terminologie, 9(1), 8-12.

    Abstract

    The ISOcat Data Category Registry (www.isocat.org) of Technical Committee ISO/TC 37 (Terminology and other language and content resources) describes field names and values for language resources. Recommended field names and reliable definitions are intended to help ensure that language data can be reused independently of applications, platforms, and communities of practice (CoP). Data Category Selections can be viewed, printed, exported, and, after free registration, also newly created.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., Hino, Y., & Lupker, S. J. (2021). Orthographic properties of distractors do influence phonological Stroop effects: Evidence from Japanese Romaji distractors. Memory & Cognition, 49(3), 600-612. doi:10.3758/s13421-020-01103-8.

    Abstract

    In attempting to understand mental processes, it is important to use a task that appropriately reflects the underlying processes being investigated. Recently, Verdonschot and Kinoshita (Memory & Cognition, 46, 410-425, 2018) proposed that a variant of the Stroop task, the "phonological Stroop task", might be a suitable tool for investigating speech production. The major advantage of this task is that it is apparently not affected by the orthographic properties of the stimuli, unlike other commonly used tasks (e.g., associative-cuing and word-reading tasks). The viability of this proposal was examined in the present experiments by manipulating the script types of Japanese distractors. For Romaji distractors (e.g., "kushi"), color-naming responses were faster when the initial phoneme was shared between the color name and the distractor than when the initial phonemes were different, thereby showing a phoneme-based phonological Stroop effect (Experiment 1). In contrast, no such effect was observed when the same distractors were presented in Katakana (e.g., "クシ"), replicating Verdonschot and Kinoshita's original results (Experiment 2). A phoneme-based effect was again found when the Katakana distractors used in Verdonschot and Kinoshita's original study were transcribed and presented in Romaji (Experiment 3). Because the observation of a phonemic effect directly depended on the orthographic properties of the distractor stimuli, we conclude that the phonological Stroop task is also susceptible to orthographic influences.
  • Zaadnoordijk, L., Buckler, H., Cusack, R., Tsuji, S., & Bergmann, C. (2021). A global perspective on testing infants online: Introducing ManyBabies-AtHome. Frontiers in Psychology, 12: 703234. doi:10.3389/fpsyg.2021.703234.

    Abstract

    Online testing holds great promise for infant scientists. It could increase participant diversity, improve reproducibility and collaborative possibilities, and reduce costs for researchers and participants. However, despite the rise of platforms and participant databases, little work has been done to overcome the challenges of making this approach available to researchers across the world. In this paper, we elaborate on the benefits of online infant testing from a global perspective and identify challenges for the international community that have been outside of the scope of previous literature. Furthermore, we introduce ManyBabies-AtHome, an international, multi-lab collaboration that is actively working to facilitate practical and technical aspects of online testing as well as address ethical concerns regarding data storage and protection, and cross-cultural variation. The ultimate goal of this collaboration is to improve the method of testing infants online and make it globally available.
  • Zeshan, U., Escobedo Delgado, C. E., Dikyuva, H., Panda, S., & De Vos, C. (2013). Cardinal numerals in rural sign languages: Approaching cross-modal typology. Linguistic Typology, 17(3), 357-396. doi:10.1515/lity-2013-0019.

    Abstract

    This article presents data on cardinal numerals in three sign languages from small-scale communities with hereditary deafness. The unusual features found in these data considerably extend the known range of typological variety across sign languages. Some features, such as non-decimal numeral bases, are previously unattested in sign languages but familiar from spoken languages, while others, such as subtractive sub-systems, are rare in both sign and speech. We conclude that for a complete typological appraisal of a domain, an approach to cross-modal typology, which includes a typologically diverse range of sign languages in addition to spoken languages, is both instructive and feasible.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2021). Electrophysiological signatures of second language multimodal comprehension. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2971-2977). Vienna: Cognitive Science Society.

    Abstract

    Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG from highly proficient bilinguals while they watched naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce the N400. Further, effects of meaningful gestures but not mouth informativeness are enhanced by prosodic accentuation, whereas effects of mouth movements are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1, L2 participants benefit less from cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 speakers do, albeit to a lesser extent.
  • Yu, C., Zhang, Y., Slone, L. K., & Smith, L. B. (2021). The infant’s view redefines the problem of referential uncertainty in early word learning. Proceedings of the National Academy of Sciences of the United States of America, 118(52): e2107019118. doi:10.1073/pnas.2107019118.

    Abstract

    The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent–infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2021). Cross-situational learning from ambiguous egocentric input is a continuous process: Evidence using the human simulation paradigm. Cognitive Science, 45(7): e13010. doi:10.1111/cogs.13010.

    Abstract

    Recent laboratory experiments have shown that both infant and adult learners can acquire word-referent mappings using cross-situational statistics. The vast majority of the work on this topic has used unfamiliar objects presented on neutral backgrounds as the visual contexts for word learning. However, these laboratory contexts are much different than the real-world contexts in which learning occurs. Thus, the feasibility of generalizing cross-situational learning beyond the laboratory is in question. Adapting the Human Simulation Paradigm, we conducted a series of experiments examining cross-situational learning from children's egocentric videos captured during naturalistic play. Focusing on individually ambiguous naming moments that naturally occur during toy play, we asked how statistical learning unfolds in real time through accumulating cross-situational statistics in naturalistic contexts. We found that even when learning situations were individually ambiguous, learners' performance gradually improved over time. This improvement was driven in part by learners' use of partial knowledge acquired from previous learning situations, even when they had not yet discovered correct word-object mappings. These results suggest that word learning is a continuous process by means of real-time information integration.
  • Zhang, Y., Amatuni, A., Cain, E., Wang, X., Crandall, D., & Yu, C. (2021). Human learners integrate visual and linguistic information in cross-situational verb learning. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2267-2273). Vienna: Cognitive Science Society.

    Abstract

    Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2023). The role of multimodal cues in second language comprehension. Scientific Reports, 13: 20824. doi:10.1038/s41598-023-47643-2.

    Abstract

    In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L2 comprehenders may be supported by the presence of (at least some) multimodal cues, as these provide correlated and convergent information that may aid linguistic processing. However, it is also the case that multimodal cues may be less used by L2 comprehenders because linguistic processing is more demanding than for L1 comprehenders, leaving more limited resources for the processing of multimodal cues. In this study, we investigated how L2 comprehenders use multimodal cues in naturalistic stimuli (while participants watched videos of a speaker), as measured by electrophysiological responses (N400) to words, and whether there are differences between L1 and L2 comprehenders. We found that prosody, gestures, and informative mouth movements each reduced the N400 in L2, indexing easier comprehension. Nevertheless, L2 participants showed weaker effects for each cue compared to L1 comprehenders, with the exception of meaningful gestures and informative mouth movements. These results show that L2 comprehenders focus on specific multimodal cues – meaningful gestures that support meaningful interpretation and mouth movements that enhance the acoustic signal – while using multimodal cues to a lesser extent than L1 comprehenders overall.

    Additional information

    supplementary materials
  • Wu, S., Zhao, J., de Villiers, J., Liu, X. L., Rolfhus, E., Sun, X. N., Li, X. Y., Pan, H., Wang, H. W., Zhu, Q., Dong, Y. Y., Zhang, Y. T., & Jiang, F. (2023). Prevalence, co-occurring difficulties, and risk factors of developmental language disorder: First evidence for Mandarin-speaking children in a population-based study. The Lancet Regional Health - Western Pacific, 34: 100713. doi:10.1016/j.lanwpc.2023.100713.

    Abstract

    Background: Developmental language disorder (DLD) is a condition that significantly affects children's achievement but has been understudied. We aim to estimate the prevalence of DLD in Shanghai, compare the co-occurrence of difficulties between children with DLD and those with typical development (TD), and investigate the early risk factors for DLD.

    Methods: We estimated DLD prevalence using data from a population-based survey with a cluster random sampling design in Shanghai, China. A subsample of children (aged 5-6 years) received an onsite evaluation, and each child was categorized as TD or DLD. The proportions of children with socio-emotional behavior (SEB) difficulties, low non-verbal IQ (NVIQ), and poor school readiness were calculated among children with TD and DLD. We used multiple imputation to address the missing values of risk factors. Univariate and multivariate regression models adjusted with sampling weights were used to estimate the correlation of each risk factor with DLD.

    Findings: Of 1082 children who were approached for the onsite evaluation, 974 (90.0%) completed the language ability assessments, of whom 74 met the criteria for DLD, resulting in a prevalence of 8.5% (95% CI 6.3-11.5) when adjusted with sampling weights. Compared with TD children, children with DLD had higher rates of concurrent difficulties, including SEB (total difficulties score at-risk: 156 [17.3%] of 900 TD vs. 28 [37.8%] of 74 DLD, p < 0.0001), low NVIQ (3 [0.3%] of 900 TD vs. 8 [10.8%] of 74 DLD, p < 0.0001), and poor school readiness (71 [7.9%] of 900 TD vs. 13 [17.6%] of 74 DLD, p = 0.0040). After accounting for all other risk factors, a higher risk of DLD was associated with a lack of parent-child interaction diversity (adjusted odds ratio [aOR] = 3.08, 95% CI = 1.29-7.37; p = 0.012) and with lower kindergarten level (third level compared to demonstration and first level: aOR = 6.15, 95% CI = 1.92-19.63; p = 0.0020).

    Interpretation: The prevalence of DLD and its co-occurrence with other difficulties suggest the need for further attention. Family and kindergarten factors were found to contribute to DLD, suggesting that multi-sector coordinated efforts are needed to better identify and serve DLD populations at home, in schools, and in clinical settings.

    Funding: The study was supported by Shanghai Municipal Education Commission (No. 2022you1-2, D1502), the Innovative Research Team of High-level Local Universities in Shanghai (No. SHSMU-ZDCX20211900), Shanghai Municipal Health Commission (No.GWV-10.1-XK07), and the National Key Research and Development Program of China (No. 2022YFC2705201).
  • Zhong, S., Wei, L., Zhao, C., Yang, L., Di, Z., Francks, C., & Gong, G. (2021). Interhemispheric relationship of genetic influence on human brain connectivity. Cerebral Cortex, 31(1), 77-88. doi:10.1093/cercor/bhaa207.

    Abstract

    To understand the origins of interhemispheric differences and commonalities/coupling in human brain wiring, it is crucial to determine how homologous interregional connectivities of the left and right hemispheres are genetically determined and related. To address this, in the present study, we analyzed human twin and pedigree samples with high-quality diffusion magnetic resonance imaging tractography and estimated the heritability and genetic correlation of homologous left and right white matter (WM) connections. The results showed that the heritability of WM connectivity was similar and coupled between the 2 hemispheres and that the degree of overlap in genetic factors underlying homologous WM connectivity (i.e., interhemispheric genetic correlation) varied substantially across the human brain: from complete overlap to complete nonoverlap. Particularly, the heritability was significantly stronger and the chance of interhemispheric complete overlap in genetic factors was higher in subcortical WM connections than in cortical WM connections. In addition, the heritability and interhemispheric genetic correlations were stronger for long-range connections than for short-range connections. These findings highlight the determinants of the genetics underlying WM connectivity and its interhemispheric relationships, and provide insight into the genetic basis of WM connectivity asymmetries in both healthy and disease states.

    Additional information

    Supplementary data
  • Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.

    Abstract

    Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook across a two-week period 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.

    Additional information

    stimulus materials
  • Zimianiti, E. (2021). Adjective-noun constructions in Griko: Focusing on measuring adjectives and their placement in the nominal domain. LingUU Journal, 5(2), 62-75.

    Abstract

    This paper examines adjectival placement in Griko, an Italian-Greek language variety. Guardiano and Stavrou (2019, 2014) have argued that there is a gap of evidence in the diachrony of adjectives in prenominal position and in particular, of measuring adjectives. Evidence is presented in this paper contradicting the aforementioned claims. After considering the placement of adjectives in Greek and Italian, and their similarities and differences, the adjectival pattern of Griko is analysed. The analysis is based mostly on written data from the early 20th century proving the prenominal position of adjectives and adding to the diachronic schema of adjectival placement in Griko.
  • Zimianiti, E., Dimitrakopoulou, M., & Tsangalidis, A. (2021). Thematic roles in dementia: The case of psychological verbs. In A. Botinis (Ed.), ExLing 2021: Proceedings of the 12th International Conference of Experimental Linguistics (pp. 269-272). Athens, Greece: ExLing Society.

    Abstract

    This study investigates the difficulty of people with Mild Cognitive Impairment (MCI), mild and moderate Alzheimer’s disease (AD) in the production and comprehension of psychological verbs, as thematic realization may involve both the canonical and non-canonical realization of arguments. More specifically, we aim to examine whether there is a deficit in the mapping of syntactic and semantic representations in psych-predicates regarding Greek-speaking individuals with MCI and AD, and whether the linguistic abilities associated with θ-role assignment decrease as the disease progresses. Moreover, given the decline of cognitive abilities in people with MCI and AD, we explore the effects of components of memory (Semantic, Episodic, and Working Memory) on the assignment of thematic roles in constructions with psychological verbs.
  • Zinken, J., Kaiser, J., Weidner, M., Mondada, L., Rossi, G., & Sorjonen, M.-L. (2021). Rule talk: Instructing proper play with impersonal deontic statements. Frontiers in Communication, 6: 660394. doi:10.3389/fcomm.2021.660394.

    Abstract

    The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. ‘It’s not allowed to do this’). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction”. The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
  • Zioga, I., Weissbart, H., Lewis, A. G., Haegens, S., & Martin, A. E. (2023). Naturalistic spoken language comprehension is supported by alpha and beta oscillations. The Journal of Neuroscience, 43(20), 3718-3732. doi:10.1523/JNEUROSCI.1500-22.2023.

    Abstract

    Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional role of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. Left temporal, fundamental language regions are involved in language comprehension in α, while frontal and parietal, higher-order language regions, and motor regions are involved in β. Critically, α- and β-band dynamics seem to subserve language comprehension tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.
  • Zora, H., Riad, T., Ylinen, S., & Csépe, V. (2021). Phonological variations are compensated at the lexical level: Evidence from auditory neural activity. Frontiers in Human Neuroscience, 15: 622904. doi:10.3389/fnhum.2021.622904.

    Abstract

    Dealing with phonological variations is important for speech processing. This article addresses whether phonological variations introduced by assimilatory processes are compensated for at the pre-lexical or lexical level, and whether the nature of variation and the phonological context influence this process. To this end, Swedish nasal regressive place assimilation was investigated using the mismatch negativity (MMN) component. In nasal regressive assimilation, the coronal nasal assimilates to the place of articulation of a following segment, most clearly with a velar or labial place of articulation, as in utan mej “without me” > [ʉːtam mɛjː]. In a passive auditory oddball paradigm, 15 Swedish speakers were presented with Swedish phrases with attested and unattested phonological variations and contexts for nasal assimilation. Attested variations – a coronal-to-labial change as in utan “without” > [ʉːtam] – were contrasted with unattested variations – a labial-to-coronal change as in utom “except” > ∗[ʉːtɔn] – in appropriate and inappropriate contexts created by mej “me” [mɛjː] and dej “you” [dɛjː]. Given that the MMN amplitude depends on the degree of variation between two stimuli, the MMN responses were expected to indicate to what extent the distance between variants was tolerated by the perceptual system. Since the MMN response reflects not only low-level acoustic processing but also higher-level linguistic processes, the results were predicted to indicate whether listeners process assimilation at the pre-lexical and lexical levels. The results indicated no significant interactions across variations, suggesting that variations in phonological forms do not incur any cost in lexical retrieval; hence such variation is compensated for at the lexical level. However, since the MMN response reached significance only for a labial-to-coronal change in a labial context and for a coronal-to-labial change in a coronal context, the compensation might have been influenced by the nature of variation and the phonological context. It is therefore concluded that while assimilation is compensated for at the lexical level, there is also some influence from pre-lexical processing. The present results reveal not only signal-based perception of phonological units, but also higher-level lexical processing, and are thus able to reconcile the bottom-up and top-down models of speech processing.
  • Zora, H., & Csépe, V. (2021). Perception of prosodic modulations of linguistic and paralinguistic origin: Evidence from early auditory event-related potentials. Frontiers in Neuroscience, 15: 797487. doi:10.3389/fnins.2021.797487.

    Abstract

    How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
  • Zora, H., Tremblay, A. C., Gussenhoven, C., & Liu, F. (Eds.). (2023). Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Lausanne: Frontiers Media SA. doi:10.3389/978-2-8325-3301-7.
  • Zora, H., Wester, J. M., & Csépe, V. (2023). Predictions about prosody facilitate lexical access: Evidence from P50/N100 and MMN components. International Journal of Psychophysiology, 194: 112262. doi:10.1016/j.ijpsycho.2023.112262.

    Abstract

    Research into the neural foundation of perception asserts a model where top-down predictions modulate the bottom-up processing of sensory input. Despite becoming increasingly influential in cognitive neuroscience, the precise account of this predictive coding framework remains debated. In this study, we aim to contribute to this debate by investigating how predictions about prosody facilitate speech perception, and to shed light especially on lexical access influenced by simultaneous predictions in different domains, inter alia, prosodic and semantic. Using a passive auditory oddball paradigm, we examined neural responses to prosodic changes, leading to a semantic change as in Dutch nouns canon [ˈkaːnɔn] ‘cannon’ vs kanon [kaːˈnɔn] ‘canon’, and used acoustically identical pseudowords as controls. Results from twenty-eight native speakers of Dutch (age range 18–32 years) indicated an enhanced P50/N100 complex to prosodic change in pseudowords as well as an MMN response to both words and pseudowords. The enhanced P50/N100 response to pseudowords is claimed to indicate that all relevant auditory information is still processed by the brain, whereas the reduced response to words might reflect the suppression of information that has already been encoded. The MMN response to pseudowords and words, on the other hand, is best justified by the unification of previously established prosodic representations with sensory and semantic input respectively. This pattern of results is in line with the predictive coding framework acting on multiple levels and is of crucial importance to indicate that predictions about linguistic prosodic information are utilized by the brain as early as 50 ms.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2023). In conversation, answers are remembered better than the questions themselves. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(12), 1971-1988. doi:10.1037/xlm0001292.

    Abstract

    Language is used in communicative contexts to identify and successfully transmit new information that should be later remembered. In three studies, we used question–answer pairs, a naturalistic device for focusing information, to examine how properties of conversations inform later item memory. In Experiment 1, participants viewed three pictures while listening to a recorded question–answer exchange between two people about the locations of two of the displayed pictures. In a memory recognition test conducted online a day later, participants recognized the names of pictures that served as answers more accurately than the names of pictures that appeared as questions. This suggests that this type of focus indeed boosts memory. In Experiment 2, participants listened to the same items embedded in declarative sentences. There was a reduced memory benefit for the second item, confirming the role of linguistic focus on later memory beyond a simple serial-position effect. In Experiment 3, two participants asked and answered the same questions about objects in a dialogue. Here, answers continued to receive a memory benefit, and this focus effect was accentuated by language production such that information-seekers remembered the answers to their questions better than information-givers remembered the questions they had been asked. Combined, these studies show how people’s memory for conversation is modulated by the referential status of the items mentioned and by the speaker roles of the conversation participants.
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process. This is because when errors are made, they occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current Directions in Turkish Sign Language Research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities.
