Publications

  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Wnuk, E., Verkerk, A., Levinson, S. C., & Majid, A. (2022). Color technology is not necessary for rich and efficient color language. Cognition, 229: 105223. doi:10.1016/j.cognition.2022.105223.

    Abstract

    The evolution of basic color terms in language is claimed to be stimulated by technological development, involving technological control of color or exposure to artificially colored objects. Accordingly, technologically “simple” non-industrialized societies are expected to have poor lexicalization of color, i.e., only rudimentary lexica of 2, 3 or 4 basic color terms, with unnamed gaps in the color space. While it may indeed be the case that technology stimulates lexical growth of color terms, it is sometimes considered a sine qua non for color salience and lexicalization. We provide novel evidence that this overlooks the role of the natural environment, and people's engagement with the environment, in the evolution of color vocabulary. We introduce the Maniq—nomadic hunter-gatherers with no color technology, but who have a basic color lexicon of 6 or 7 terms, thus of the same order as large languages like Vietnamese and Hausa, and who routinely talk about color. We examine color language in Maniq and compare it to available data in other languages to demonstrate it has remarkably high consensual color term usage, on a par with English, and high coding efficiency. This shows colors can matter even for non-industrialized societies, suggesting technology is not necessary for color language. Instead, factors such as perceptual prominence of color in natural environments, its practical usefulness across communicative contexts, and symbolic importance can all stimulate elaboration of color language.
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Domain-general and language-specific contributions to speech production in a second language: An fMRI study using functional localizers. Scientific Reports, 14: 57. doi:10.1038/s41598-023-49375-9.

    Abstract

    For bilinguals, speaking in a second language (L2) compared to the native language (L1) is usually more difficult. In this study, we asked whether the difficulty in L2 production reflects increased demands imposed on domain-general or core language mechanisms. We compared the brain response to speech production in L1 and L2 within two functionally defined networks in the brain: the Multiple Demand (MD) network and the language network. We found that speech production in L2 was linked to a widespread increase of brain activity in the domain-general MD network. The language network did not show similarly robust differences in processing speech in the two languages; however, we found an increased response to L2 production in the language-specific portion of the left inferior frontal gyrus (IFG). To further explore our results, we looked at domain-general and language-specific responses within the brain structures postulated to form a Bilingual Language Control (BLC) network. Within this network, we found a robust increase in the response to L2 in the domain-general voxels, but also in some language-specific voxels, including in the left IFG. Our findings show that L2 production strongly engages domain-general mechanisms, but affects only the language-sensitive portions of the left IFG. These results put constraints on current models of bilingual language control by precisely disentangling the domain-general and language-specific contributions to the difficulty of speech production in L2.

    Additional information

    supplementary materials
  • Wolna, A., Szewczyk, J., Diaz, M., Domagalik, A., Szwed, M., & Wodniecka, Z. (2024). Tracking components of bilingual language control in speech production: An fMRI study using functional localizers. Neurobiology of Language, 5(2), 315-340. doi:10.1162/nol_a_00128.

    Abstract

    When bilingual speakers switch back to speaking in their native language (L1) after having used their second language (L2), they often experience difficulty in retrieving words in their L1. This phenomenon is referred to as the L2 after-effect. We used the L2 after-effect as a lens to explore the neural bases of bilingual language control mechanisms. Our goal was twofold: first, to explore whether bilingual language control draws on domain-general or language-specific mechanisms; second, to investigate the precise mechanism(s) that drive the L2 after-effect. We used a precision fMRI approach based on functional localizers to measure the extent to which the brain activity that reflects the L2 after-effect overlaps with the language network (Fedorenko et al., 2010) and the domain-general multiple demand network (Duncan, 2010), as well as three task-specific networks that tap into interference resolution, lexical retrieval, and articulation. Forty-two Polish–English bilinguals participated in the study. Our results show that the L2 after-effect reflects increased engagement of domain-general but not language-specific resources. Furthermore, contrary to previously proposed interpretations, we did not find evidence that the effect reflects increased difficulty related to lexical access, articulation, and the resolution of lexical interference. We propose that difficulty of speech production in the picture naming paradigm—manifested as the L2 after-effect—reflects interference at a nonlinguistic level of task schemas or a general increase of cognitive control engagement during speech production in L1 after L2.

    Additional information

    supplementary materials
  • Wolters, G., & Poletiek, F. H. (2008). Beslissen over aangiftes van seksueel misbruik bij kinderen [Deciding on reports of child sexual abuse]. De Psycholoog, 43, 29-29.
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by inconsistent lexical tone, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of the effects for pure pitch accent and pure lexical tone violations. However, the effect for the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. It is suggested that there might be a correspondence between the neural mechanisms underlying pitch accent and lexical meaning processing in context. They both reflect the integration of the current information into a discourse context, independent of whether the current information was sentence meaning indicated by accentuation or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2022). Unsupervised text segmentation predicts eye fixations during reading. Frontiers in Artificial Intelligence, 5: 731615. doi:10.3389/frai.2022.731615.

    Abstract

    Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows that the basic units during reading, i.e., the items in the mental lexicon, are not always words, but can also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume that eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous to the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show that the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages over alternative models in terms of both prediction score and efficiency. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
  • Zeller, J., Bylund, E., & Lewis, A. G. (2022). The parser consults the lexicon in spite of transparent gender marking: EEG evidence from noun class agreement processing in Zulu. Cognition, 226: 105148. doi:10.1016/j.cognition.2022.105148.

    Abstract

    In sentence comprehension, the parser in many languages has the option to use both the morphological form of a noun and its lexical representation when evaluating agreement. The additional step of consulting the lexicon incurs processing costs, and an important question is whether the parser takes that step even when the formal cues alone are sufficiently reliable to evaluate agreement. Our study addressed this question using electrophysiology in Zulu, a language where both grammatical gender and number features are reliably expressed formally by noun class prefixes, but only gender features are lexically specified. We observed a reduced, more topographically focal LAN and more frontally distributed alpha/beta power effects for gender compared to number agreement violations. These differences provide evidence that for gender mismatches, even though the formal cues are reliable, the parser nevertheless takes the additional step of consulting the noun's lexical representation, a step which is not available for number.

  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U., Vasishta, M. N., & Sethna, M. (2005). Implementation of Indian Sign Language in educational settings. Asia Pacific Disability Rehabilitation Journal, 16(1), 16-40.

    Abstract

    This article reports on several sub-projects of research and development related to the use of Indian Sign Language in educational settings. In many countries around the world, sign languages are now recognised as the legitimate, full-fledged languages of the deaf communities that use them. In India, the development of sign language resources and their application in educational contexts is still in its initial stages. The work reported on here is the first principled and comprehensive effort to establish educational programmes in Indian Sign Language at a national level. Programmes are of several types: a) Indian Sign Language instruction for hearing people; b) sign language teacher training programmes for deaf people; and c) educational materials for use in schools for the Deaf. The conceptual approach used in the programmes for deaf students is known as bilingual education, which emphasises the acquisition of a first language, Indian Sign Language, alongside the acquisition of spoken languages, primarily in their written form.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zettersten, M., Cox, C., Bergmann, C., Tsui, A. S. M., Soderstrom, M., Mayor, J., Lundwall, R. A., Lewis, M., Kosie, J. E., Kartushina, N., Fusaroli, R., Frank, M. C., Byers-Heinlein, K., Black, A. K., & Mathur, M. B. (2024). Evidence for infant-directed speech preference is consistent across large-scale, multi-site replication and meta-analysis. Open Mind, 8, 439-461. doi:10.1162/opmi_a_00134.

    Abstract

    There is substantial evidence that infants prefer infant-directed speech (IDS) to adult-directed speech (ADS). The strongest evidence for this claim has come from two large-scale investigations: i) a community-augmented meta-analysis of published behavioral studies and ii) a large-scale multi-lab replication study. In this paper, we aim to improve our understanding of the IDS preference and its boundary conditions by combining and comparing these two data sources across key population and design characteristics of the underlying studies. Our analyses reveal that both the meta-analysis and multi-lab replication show moderate effect sizes (d ≈ 0.35 for each estimate) and that both of these effects persist when relevant study-level moderators are added to the models (i.e., experimental methods, infant ages, and native languages). However, while the overall effect size estimates were similar, the two sources diverged in the effects of key moderators: both infant age and experimental method predicted IDS preference in the multi-lab replication study, but showed no effect in the meta-analysis. These results demonstrate that the IDS preference generalizes across a variety of experimental conditions and sampling characteristics, while simultaneously identifying key differences in the empirical picture offered by each source individually and pinpointing areas where substantial uncertainty remains about the influence of theoretically central moderators on IDS preference. Overall, our results show how meta-analyses and multi-lab replications can be used in tandem to understand the robustness and generalizability of developmental phenomena.

    Additional information

    supplementary data
    link to preprint
  • Zhang, Q., Zhou, Y., & Lou, H. (2022). The dissociation between age of acquisition and word frequency effects in Chinese spoken picture naming. Psychological Research, 86, 1918-1929. doi:10.1007/s00426-021-01616-0.

    Abstract

    This study aimed to examine the locus of age of acquisition (AoA) and word frequency (WF) effects in Chinese spoken picture naming, using a picture–word interference task. We conducted four experiments manipulating the properties of picture names (AoA in Experiments 1 and 2, while controlling WF; and WF in Experiments 3 and 4, while controlling AoA), and the relations between distractors and targets (semantic or phonological relatedness). Both Experiments 1 and 2 demonstrated AoA effects in picture naming; pictures of early acquired concepts were named faster than those acquired later. There was an interaction between AoA and semantic relatedness, but not between AoA and phonological relatedness, suggesting localisation of AoA effects at the stage of lexical access in picture naming. Experiments 3 and 4 demonstrated WF effects: pictures of high-frequency concepts were named faster than those of low-frequency concepts. WF interacted with both phonological and semantic relatedness, suggesting localisation of WF effects at multiple levels of picture naming, including lexical access and phonological encoding. Our findings show that AoA and WF effects exist in Chinese spoken word production and may arise at related processes of lexical selection.
  • Zhang, J., Bao, S., Furumai, R., Kucera, K. S., Ali, A., Dean, N. M., & Wang, X.-F. (2005). Protein phosphatase 5 is required for ATR-mediated checkpoint activation. Molecular and Cellular Biology, 25, 9910-9919. doi:10.1128/MCB.25.22.9910-9919.2005.

    Abstract

    In response to DNA damage or replication stress, the protein kinase ATR is activated and subsequently transduces genotoxic signals to cell cycle control and DNA repair machinery through phosphorylation of a number of downstream substrates. Very little is known about the molecular mechanism by which ATR is activated in response to genotoxic insults. In this report, we demonstrate that protein phosphatase 5 (PP5) is required for the ATR-mediated checkpoint activation. PP5 forms a complex with ATR in a genotoxic stress-inducible manner. Interference with the expression or the activity of PP5 leads to impairment of the ATR-mediated phosphorylation of hRad17 and Chk1 after UV or hydroxyurea treatment. Similar results are obtained in ATM-deficient cells, suggesting that the observed defect in checkpoint signaling is the consequence of impaired functional interaction between ATR and PP5. In cells exposed to UV irradiation, PP5 is required to elicit an appropriate S-phase checkpoint response. In addition, loss of PP5 leads to premature mitosis after hydroxyurea treatment. Interestingly, reduced PP5 activity exerts differential effects on the formation of intranuclear foci by ATR and replication protein A, implicating a functional role for PP5 in a specific stage of the checkpoint signaling pathway. Taken together, our results suggest that PP5 plays a critical role in the ATR-mediated checkpoint activation.
  • Wu, S., Zhang, D., Li, X., Zhao, J., Sun, X., Shi, L., Mao, Y., Zhang, Y., & Jiang, F. (2022). Siblings and early childhood development: Evidence from a population-based cohort in preschoolers from Shanghai. International Journal of Environmental Research and Public Health, 19(9): 5739. doi:10.3390/ijerph19095739.

    Abstract

    (1) Background: The current study aims to investigate the association between the presence of a sibling and early childhood development (ECD). (2) Methods: Data were obtained from a large-scale population-based cohort in Shanghai. Children were followed from three to six years old. Based on birth order, the sample was divided into four groups: single child, younger child, elder child, and single-elder transfer (transfer from single-child to elder-child). Psychosocial well-being and school readiness were assessed with the total difficulties score from the Strengths and Difficulties Questionnaire (SDQ) and the overall development score from the early Human Capability Index (eHCI), respectively. A multilevel model was conducted to evaluate the main effect of each sibling group and the group × age interaction effect on psychosocial well-being and school readiness. (3) Results: Across all measures, children in the younger child group presented with fewer psychosocial problems (β = −0.96, 95% CI: −1.44, −0.48, p < 0.001) and higher school readiness scores (β = 1.56, 95% CI: 0.61, 2.51, p = 0.001). No significant or only marginally significant difference was found between the elder group and the single-child group. Compared to the single-child group, the single-elder transfer group presented with slower development in both psychosocial well-being (Age × Group: β = 0.37, 95% CI: 0.18, 0.56, p < 0.001) and school readiness (Age × Group: β = −0.75, 95% CI: −1.10, −0.40, p < 0.001). The sibling-ECD effects did not differ between children from families of low versus high socioeconomic status. (4) Conclusion: The current study suggested that the presence of a sibling was not associated with worse developmental outcomes in general. Rather, children with an elder sibling are more likely to present with better ECD.
  • Zhao, J., Yu, Z., Sun, X., Wu, S., Zhang, J., Zhang, D., Zhang, Y., & Jiang, F. (2022). Association between screen time trajectory and early childhood development in children in China. JAMA Pediatrics, 176(8), 768-775. doi:10.1001/jamapediatrics.2022.1630.

    Abstract

    Importance: Screen time has become an integral part of children's daily lives. Nevertheless, the developmental consequences of screen exposure in young children remain unclear.

    Objective: To investigate the screen time trajectory from 6 to 72 months of age and its association with children's development at age 72 months in a prospective birth cohort.

    Design, setting, and participants: Women in Shanghai, China, who were at 34 to 36 gestational weeks and had an expected delivery date between May 2012 and July 2013 were recruited for this cohort study. Their children were followed up at 6, 9, 12, 18, 24, 36, 48, and 72 months of age. Children's screen time was classified into 3 groups at age 6 months: continued low (ie, stable amount of screen time), late increasing (ie, sharp increase in screen time at age 36 months), and early increasing (ie, large amount of screen time in early stages that remained stable after age 36 months). Cognitive development was assessed by specially trained research staff in a research clinic. Of 262 eligible mother-offspring pairs, 152 dyads had complete data regarding all variables of interest and were included in the analyses. Data were analyzed from September 2019 to November 2021.

    Exposures: Mothers reported screen times of children at 6, 9, 12, 18, 24, 36, 48, and 72 months of age.

    Main outcomes and measures: The cognitive development of children was evaluated using the Wechsler Intelligence Scale for Children, 4th edition, at age 72 months. Social-emotional development was measured by the Strengths and Difficulties Questionnaire, which was completed by the child's mother. The study described demographic characteristics, maternal mental health, child's temperament at age 6 months, and mental development at age 12 months by subgroups clustered by a group-based trajectory model. Group difference was examined by analysis of variance.

    Results: A total of 152 mother-offspring dyads were included in this study, including 77 girls (50.7%) and 75 boys (49.3%) (mean [SD] age of the mothers was 29.7 [3.3] years). Children's screen time trajectory from age 6 to 72 months was classified into 3 groups: continued low (110 [72.4%]), late increasing (17 [11.2%]), and early increasing (25 [16.4%]). Compared with the continued low group, the late increasing group had lower scores on the Full-Scale Intelligence Quotient (β coefficient, -8.23; 95% CI, -15.16 to -1.30; P < .05) and the General Ability Index (β coefficient, -6.42; 95% CI, -13.70 to 0.86; P = .08); the early increasing group presented with lower scores on the Full-Scale Intelligence Quotient (β coefficient, -6.68; 95% CI, -12.35 to -1.02; P < .05) and the Cognitive Proficiency Index (β coefficient, -10.56; 95% CI, -17.23 to -3.90; P < .01) and a higher total difficulties score (β coefficient, 2.62; 95% CI, 0.49-4.76; P < .05).

    Conclusions and relevance: This cohort study found that excessive screen time in early years was associated with poor cognitive and social-emotional development. This finding may be helpful in encouraging awareness among parents of the importance of onset and duration of children's screen time.
  • Zhou, H., Van der Ham, S., De Boer, B., Bogaerts, L., & Raviv, L. (2024). Modality and stimulus effects on distributional statistical learning: Sound vs. sight, time vs. space. Journal of Memory and Language, 138: 104531. doi:10.1016/j.jml.2024.104531.

    Abstract

    Statistical learning (SL) is postulated to play an important role in the process of language acquisition as well as in other cognitive functions. It was found to enable learning of various types of statistical patterns across different sensory modalities. However, few studies have distinguished distributional SL (DSL) from sequential and spatial SL, or examined DSL across modalities using comparable tasks. Considering the relevance of such findings to the nature of SL, the current study investigated the modality- and stimulus-specificity of DSL. Using a within-subject design we compared DSL performance in auditory and visual modalities. For each sensory modality, two stimulus types were used: linguistic versus non-linguistic auditory stimuli and temporal versus spatial visual stimuli. In each condition, participants were exposed to stimuli that varied in their length as they were drawn from two categories (short versus long). DSL was assessed using a categorization task and a production task. Results showed that learners’ performance was only correlated for tasks in the same sensory modality. Moreover, participants were better at categorizing the temporal signals in the auditory conditions than in the visual condition, where in turn an advantage of the spatial condition was observed. In the production task participants exaggerated signal length more for linguistic signals than non-linguistic signals. Together, these findings suggest that DSL is modality- and stimulus-sensitive.

    Additional information

    link to preprint
  • Zimianiti, E. (2022). Is semantic memory the winning component in second language teaching with Accelerative Integrated Method (AIM)? LingUU Journal, 6(1), 54-62.

    Abstract

    This paper constitutes a research proposal based on Rousse-Malpalt's (2019) dissertation, which extensively examines the effectiveness of the Accelerative Integrated Method (AIM) in second language (L2) learning. Although AIM has been found to be a highly effective method in comparison with non-implicit teaching methods, the reasons behind its success and effectiveness are as yet unknown. As Semantic Memory (SM) is the component of memory responsible for the conceptualization and storage of knowledge, this paper sets out to propose an investigation of its role in the learning process of AIM and to provide insights into why the embodied experience of learning with AIM is more effective than other approaches. The tasks proposed for administration take into account the relation of gestures to a learner's memorization process and Semantic Memory. Lastly, this paper offers a future research idea about the learning mechanisms of sign languages in people with hearing deficits and in the healthy population, aiming to indicate which brain mechanisms benefit from the teaching method of AIM and to reveal brain functions that are important for SLA via AIM.
  • Zioga, I., Zhou, Y. J., Weissbart, H., Martin, A. E., & Haegens, S. (2024). Alpha and beta oscillations differentially support word production in a rule-switching task. eNeuro, 11(4): ENEURO.0312-23.2024. doi:10.1523/ENEURO.0312-23.2024.

    Abstract

    Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word “tuna”, an exemplar from the same category—“seafood”—would be “shrimp”, and a feature would be “pink”). A cue indicated the task rule—exemplar or feature—either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cue compared with that for pre-cue in the left hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in the right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex, linguistic processes and offers a novel task to investigate links between rule-switching, working memory, and word production.
  • Zora, H., Gussenhoven, C., Tremblay, A., & Liu, F. (2022). Editorial: Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Frontiers in Psychology, 13: 1101499. doi:10.3389/fpsyg.2022.1101499.

    Abstract

    The interplay between categorical and continuous aspects of the speech signal remains central and yet controversial in the fields of phonetics and phonology. The division between phonological abstractions and phonetic variations has been particularly relevant to the unraveling of diverse communicative functions of pitch in the domain of prosody. Pitch influences vocal communication in two major but fundamentally different ways, and lexical and intonational tones exquisitely capture these functions. Lexical tone contrasts convey lexical meanings as well as derivational meanings at the word level and are grammatically encoded as discrete structures. Intonational tones, on the other hand, signal post-lexical meanings at the phrasal level and typically allow gradient pragmatic variations. Since categorical and gradient uses of pitch are ubiquitous and closely intertwined in their physiological and psychological processes, further research is warranted for a more detailed understanding of their structural and functional characterisations. This Research Topic addresses this matter from a wide range of perspectives, including first and second language acquisition, speech production and perception, structural and functional diversity, and working with distinct languages and experimental measures. In the following, we provide a short overview of the contributions submitted to this topic.

    Additional information

    also published as book chapter (2023)
  • Zwitserlood, I. (2008). Grammatica-vertaalmethode en Nederlandse Gebarentaal [The grammar-translation method and Sign Language of the Netherlands]. Levende Talen Magazine, 95(5), 28-29.
