Publications

  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Hammarström, H., & Nordhoff, S. (2011). LangDoc: Bibliographic infrastructure for linguistic typology. Oslo Studies in Language, 3(2), 31-43. Retrieved from https://www.journals.uio.no/index.php/osla/article/view/75.

    Abstract

    The present paper describes the ongoing project LangDoc to make a bibliography website for linguistic typology, with a near-complete database of references to documents that contain descriptive data on the languages of the world. This is intended to provide typologists with a more precise and comprehensive way to search for information on languages, and for the specific kind of information that they are interested in. The annotation scheme devised is a trade-off between annotation effort and search desiderata. The end goal is a website with browse, search, update, new items subscription and download facilities, which can hopefully be enriched by spontaneous collaborative efforts.
  • Hammarström, H., & Borin, L. (2011). Unsupervised learning of morphology. Computational Linguistics, 37(2), 309-350. doi:10.1162/COLI_a_00050.

    Abstract

    This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only morpheme segmentation) of how orthographic words are built up given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
  • Hammond, J. (2011). JVC GY-HM100U HD video camera and FFmpeg libraries [Technology review]. Language Documentation and Conservation, 5, 69-80.
  • Hanulikova, A., Mitterer, H., & McQueen, J. M. (2011). Effects of first and second language on segmentation of non-native speech. Bilingualism: Language and Cognition, 14, 506-521. doi:10.1017/S1366728910000428.

    Abstract

    We examined whether Slovak-German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech. When Slovaks listen to their native language (Hanulíková, McQueen, & Mitterer, 2010), segmentation is impaired when fixed-stress cues are absent, and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose "rose") faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized Rose, for example, faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
  • Hanulová, J., Davidson, D. J., & Indefrey, P. (2011). Where does the delay in L2 picture naming come from? Psycholinguistic and neurocognitive evidence on second language word production. Language and Cognitive Processes, 26, 902-934. doi:10.1080/01690965.2010.509946.

    Abstract

    Bilinguals are slower when naming a picture in their second language than when naming it in their first language. Although the phenomenon has been frequently replicated, it is not known what causes the delay in the second language. In this article we discuss at what processing stages a delay might arise according to current models of bilingual processing and how the available behavioural and neurocognitive evidence relates to these proposals. Suggested plausible mechanisms, such as frequency or interference effects, are compatible with a naming delay arising at different processing stages. Haemodynamic and electrophysiological data seem to point to a postlexical stage but are still too scarce to support a definite conclusion.
  • Harmon, Z., & Kapatsinski, V. (2021). A theory of repetition and retrieval in language production. Psychological Review, 128, 1112-1144. doi:10.1037/rev0000305.

    Abstract

    Repetition appears to be part of error correction and action preparation in all domains that involve producing an action sequence. The present work contends that the ubiquity of repetition is due to its role in resolving a problem inherent to planning and retrieval of action sequences: the Problem of Retrieval. Repetitions occur when the production to perform next is not activated enough to be executed. Repetitions are helpful in this situation because the repeated action sequence activates the likely continuation. We model a corpus of natural speech using a recurrent network, with words as units of production. We show that repeated material makes upcoming words more predictable, especially when more than one word is repeated. Speakers are argued to produce multiword repetitions by using backward associations to reactivate recently produced words. The existence of multiword repetitions means that speakers must decide where to reinitiate execution from. We show that production restarts from words that have seldom occurred in a predictive preceding-word context and have often occurred utterance-initially. These results are explained by competition between preceding-context and top-down cues over the course of language learning. The proposed theory improves on structural accounts of repetition disfluencies, and integrates repetition disfluencies in language production with repetitions observed in other domains of skilled action.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (Eds.). (2011). Visual search and visual world: Interactions among visual attention, language, and working memory [Special Issue]. Acta Psychologica, 137(2). doi:10.1016/j.actpsy.2011.01.005.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (2011). Visual search and visual world: Interactions among visual attention, language, and working memory (introduction to the special issue). Acta Psychologica, 137(2), 135-137. doi:10.1016/j.actpsy.2011.01.005.
  • Hartung, F., Wang, Y., Mak, M., Willems, R. M., & Chatterjee, A. (2021). Aesthetic appraisals of literary style and emotional intensity in narrative engagement are neurally dissociable. Communications Biology, 4: 1401. doi:10.1038/s42003-021-02926-0.

    Abstract

    Humans are deeply affected by stories, yet it is unclear how. In this study, we explored two aspects of aesthetic experiences during narrative engagement - literariness and narrative fluctuations in appraised emotional intensity. Independent ratings of literariness and emotional intensity of two literary stories were used to predict blood-oxygen-level-dependent signal changes in 52 listeners from an existing fMRI dataset. Literariness was associated with increased activation in brain areas linked to semantic integration (left angular gyrus, supramarginal gyrus, and precuneus), and decreased activation in bilateral middle temporal cortices, associated with semantic representations and word memory. Emotional intensity correlated with decreased activation in a bilateral frontoparietal network that is often associated with controlled attention. Our results confirm a neural dissociation in processing literary form and emotional content in stories and generate new questions about the function of and interaction between attention, social cognition, and semantic systems during literary engagement and aesthetic experiences.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; Other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

    Additional information

    Haun_Rekers_Tomasello_2014_supp.pdf
  • Haun, D. B. M., & Tomasello, M. (2011). Conformity to peer pressure in preschool children. Child Development, 82, 1759-1767. doi:10.1111/j.1467-8624.2011.01666.x.

    Abstract

    Both adults and adolescents often conform their behavior and opinions to peer groups, even when they themselves know better. The current study investigated this phenomenon in 24 groups of 4 children between 4;2 and 4;9 years of age. Children often made their judgments conform to those of 3 peers, who had made obviously erroneous but unanimous public judgments right before them. A follow-up study with 18 groups of 4 children between 4;0 and 4;6 years of age revealed that children did not change their “real” judgment of the situation, but only their public expression of it. Preschool children are subject to peer pressure, indicating sensitivity to peers as a primary social reference group already during the preschool years.
  • Haun, D. B. M. (2011). Memory for body movements in Namibian hunter-gatherer children. Journal of Cognitive Education and Psychology, 10, 56-62.

    Abstract

    Despite the global universality of physical space, different cultural groups vary substantially as to how they memorize it. Although European participants mostly prefer egocentric strategies (“left, right, front, back”) to memorize spatial relations, others use mostly allocentric strategies (“north, south, east, west”). Prior research has shown that some cultures show a general preference to memorize object locations and even also body movements in relation to the larger environment rather than in relation to their own body. Here, we investigate whether this cultural bias also applies to movements specifically directed at the participants' own body, emphasizing the role of ego. We show that even participants with generally allocentric biases preferentially memorize self-directed movements using egocentric spatial strategies. These results demonstrate an intricate system of interacting cultural biases and momentary situational characteristics.
  • Haun, D. B. M., Nawroth, C., & Call, J. (2011). Great apes’ risk-taking strategies in a decision making task. PLoS One, 6(12), e28801. doi:10.1371/journal.pone.0028801.

    Abstract

    We investigate decision-making behaviour in all four non-human great ape species. Apes chose between a safe and a risky option across trials of varying expected values. All species chose the safe option more often with decreasing probability of success. While all species were risk-seeking, orangutans and chimpanzees chose the risky option more often than gorillas and bonobos. Hence all four species' preferences were ordered in a manner consistent with normative dictates of expected value, but varied predictably in their willingness to take risks.
  • Haun, D. B. M., Rapold, C. J., Janzen, G., & Levinson, S. C. (2011). Plasticity of human spatial memory: Spatial language and cognition covary across cultures. Cognition, 119, 70-80. doi:10.1016/j.cognition.2010.12.009.

    Abstract

    The present paper explores cross-cultural variation in spatial cognition by comparing spatial reconstruction tasks by Dutch and Namibian elementary school children. These two communities differ in the way they predominantly express spatial relations in language. Four experiments investigate cognitive strategy preferences across different levels of task-complexity and instruction. Data show a correlation between dominant linguistic spatial frames of reference and performance patterns in non-linguistic spatial memory tasks. This correlation is shown to be stable across an increase of complexity in the spatial array. When instructed to use their respective non-habitual cognitive strategy, participants were not easily able to switch between strategies and their attempts to do so impaired their performance. These results indicate a difference not only in preference but also in competence and suggest that spatial language and non-linguistic preferences and competences in spatial cognition are systematically aligned across human populations.

  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Healthy Brain Study Consortium, Aarts, E., Akkerman, A., Altgassen, M., Bartels, R., Beckers, D., Bevelander, K., Bijleveld, E., Blaney Davidson, E., Boleij, A., Bralten, J., Cillessen, T., Claassen, J., Cools, R., Cornelissen, I., Dresler, M., Eijsvogels, T., Faber, M., Fernández, G., Figner, B., Fritsche, M., Füllbrunn, S., Gayet, S., Van Gelder, M. M. H. J., Van Gerven, M., Geurts, S., Greven, C. U., Groefsema, M., Haak, K., Hagoort, P., Hartman, Y., Van der Heijden, B., Hermans, E., Heuvelmans, V., Hintz, F., Den Hollander, J., Hulsman, A. M., Idesis, S., Jaeger, M., Janse, E., Janzing, J., Kessels, R. P. C., Karremans, J. C., De Kleijn, W., Klein, M., Klumpers, F., Kohn, N., Korzilius, H., Krahmer, B., De Lange, F., Van Leeuwen, J., Liu, H., Luijten, M., Manders, P., Manevska, K., Marques, J. P., Matthews, J., McQueen, J. M., Medendorp, P., Melis, R., Meyer, A. S., Oosterman, J., Overbeek, L., Peelen, M., Popma, J., Postma, G., Roelofs, K., Van Rossenberg, Y. G. T., Schaap, G., Scheepers, P., Selen, L., Starren, M., Swinkels, D. W., Tendolkar, I., Thijssen, D., Timmerman, H., Tutunji, R., Tuladhar, A., Veling, H., Verhagen, M., Verkroost, J., Vink, J., Vriezekolk, V., Vrijsen, J., Vyrastekova, J., Van der Wal, S., Willems, R. M., & Willemsen, A. (2021). Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One, 16(12): e0260952. doi:10.1371/journal.pone.0260952.

    Abstract

    The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods.

    https://www.trialregister.nl/trial/7955

    Additional information

    supplementary material
  • Heidlmayr, K., Ferragne, E., & Isel, F. (2021). Neuroplasticity in the phonological system: The PMN and the N400 as markers for the perception of non-native phonemic contrasts by late second language learners. Neuropsychologia, 156: 107831. doi:10.1016/j.neuropsychologia.2021.107831.

    Abstract

    Second language (L2) learners frequently encounter persistent difficulty in perceiving certain non-native sound contrasts, i.e., a phenomenon called “phonological deafness”. However, if extensive L2 experience leads to neuroplastic changes in the phonological system, then the capacity to discriminate non-native phonemic contrasts should progressively improve. Such perceptual changes should be attested by modifications at the neurophysiological level. We designed an EEG experiment in which the listeners’ perceptual capacities to discriminate second language phonemic contrasts influence the processing of lexical-semantic violations. Semantic congruency of critical words in a sentence context was driven by a phonemic contrast that was unique to the L2, English (e.g.,/ɪ/-/i:/, ship – sheep). Twenty-eight young adult native speakers of French with intermediate proficiency in English listened to sentences that contained either a semantically congruent or incongruent critical word (e.g., The anchor of the ship/*sheep was let down) while EEG was recorded. Three ERP effects were found to relate to increasing L2 proficiency: (1) a left frontal auditory N100 effect, (2) a smaller fronto-central phonological mismatch negativity (PMN) effect and (3) a semantic N400 effect. No effect of proficiency was found on oscillatory markers. The current findings suggest that neuronal plasticity in the human brain allows for the late acquisition of even hard-wired linguistic features such as the discrimination of phonemic contrasts in a second language. This is the first time that behavioral and neurophysiological evidence for the critical role of neural plasticity underlying L2 phonological processing and its interdependence with semantic processing has been provided. Our data strongly support the idea that pieces of information from different levels of linguistic processing (e.g., phonological, semantic) strongly interact and influence each other during online language processing.

    Additional information

    supplementary material
  • Henry, M. J., Cook, P. F., de Reus, K., Nityananda, V., Rouse, A. A., & Kotz, S. A. (2021). An ecological approach to measuring synchronization abilities across the animal kingdom. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200336. doi:10.1098/rstb.2020.0336.

    Abstract

    In this perspective paper, we focus on the study of synchronization abilities across the animal kingdom. We propose an ecological approach to studying nonhuman animal synchronization that begins from observations about when, how and why an animal might synchronize spontaneously with natural environmental rhythms. We discuss what we consider to be the most important, but thus far largely understudied, temporal, physical, perceptual and motivational constraints that must be taken into account when designing experiments to test synchronization in nonhuman animals. First and foremost, different species are likely to be sensitive to and therefore capable of synchronizing at different timescales. We also argue that it is fruitful to consider the latent flexibility of animal synchronization. Finally, we discuss the importance of an animal's motivational state for showcasing synchronization abilities. We demonstrate that the likelihood that an animal can successfully synchronize with an environmental rhythm is context-dependent and suggest that the list of species capable of synchronization is likely to grow when tested with ecologically honest, species-tuned experiments.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hersh, T. A., Gero, S., Rendell, L., & Whitehead, H. (2021). Using identity calls to detect structure in acoustic datasets. Methods in Ecology and Evolution, 12(9), 1668-1678. doi:10.1111/2041-210X.13644.

    Abstract

    1. Acoustic analyses can be powerful tools for illuminating structure within and between populations, especially for cryptic or difficult to access taxa. Acoustic repertoires are often compared using aggregate similarity measures across all calls of a particular type, but specific group identity calls may more clearly delineate structure in some taxa.
    2. We present a new method—the identity call method—that estimates the number of acoustically distinct subdivisions in a set of repertoires and identifies call types that characterize those subdivisions. The method uses contaminated mixture models to identify call types, assigning each call a probability of belonging to each type. Repertoires are hierarchically clustered based on similarities in call type usage, producing a dendrogram with ‘identity clades’ of repertoires and the ‘identity calls’ that best characterize each clade. We validated this approach using acoustic data from sperm whales, grey-breasted wood-wrens and Australian field crickets, and ran a suite of tests to assess parameter sensitivity.
    3. For all taxa, the method detected diagnostic signals (identity calls) and structure (identity clades; sperm whale subpopulations, wren subspecies and cricket species) that were consistent with past research. Some datasets were more sensitive to parameter variation than others, which may reflect real uncertainty or biological variability in the taxa examined. We recommend that users perform comparative analyses of different parameter combinations to determine which portions of the dendrogram warrant careful versus confident interpretation.
    4. The presence of group-characteristic identity calls does not necessarily mean animals perceive them as such. Fine-scale experiments like playbacks are a key next step to understand call perception and function. This method can help inform such studies by identifying calls that may be salient to animals and are good candidates for investigation or playback stimuli. For cryptic or difficult to access taxa with group-specific calls, the identity call method can aid managers in quantifying behavioural diversity and/or identifying putative structure within and between populations, given that acoustic data can be inexpensive and minimally invasive to collect.
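    The repertoire-clustering step summarized in point 2 above can be illustrated with a brief, hypothetical sketch. This is not the authors' code; the usage matrix, distance metric, and linkage method below are assumptions chosen only to show the general idea of grouping repertoires by call-type usage.

        # Hypothetical illustration of the repertoire-clustering step of the identity
        # call method: repertoires are grouped by how often they use each call type.
        # The usage proportions are invented; in the published method they derive from
        # contaminated mixture models that assign calls to types probabilistically.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        # Rows = repertoires, columns = proportion of calls assigned to each call type.
        usage = np.array([
            [0.60, 0.30, 0.10],  # repertoire A
            [0.55, 0.35, 0.10],  # repertoire B
            [0.10, 0.20, 0.70],  # repertoire C
            [0.15, 0.15, 0.70],  # repertoire D
        ])

        # Pairwise distances between usage profiles, then average-linkage clustering.
        tree = linkage(pdist(usage, metric="euclidean"), method="average")

        # Cutting the resulting dendrogram into two groups yields two candidate
        # 'identity clades'.
        clades = fcluster(tree, t=2, criterion="maxclust")
        print(clades)  # e.g. [1 1 2 2]: A and B cluster together, as do C and D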
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., Taylor, K. J., & Carlyon, R. P. (2011). Generalization of Perceptual Learning of Vocoded Speech. Journal of Experimental Psychology: Human Perception and Performance, 37(1), 283-295. doi:10.1037/a0020772.

    Abstract

    Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naive intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2011). Executive control of language in the bilingual brain: Integrating the evidence from neuroimaging to neuropsychology. Frontiers in Psychology, 2: 234. doi:10.3389/fpsyg.2011.00234.

    Abstract

    In this review we will focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We will also offer insights from ongoing brain-imaging studies into the development of expertise in multilingual language control. We will concentrate specifically on evidence regarding how the brain selects and controls languages for comprehension and production. This question has been addressed in a number of ways and using various tasks, including language switching during production or perception, translation, and interpretation. We will attempt to synthesize existing evidence in order to bring to light the neural substrates that are crucial to executive control of language.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Heyselaar, E., Peeters, D., & Hagoort, P. (2021). Do we predict upcoming speech content in naturalistic environments? Language, Cognition and Neuroscience, 36(4), 440-461. doi:10.1080/23273798.2020.1859568.

    Abstract

    The ability to predict upcoming actions is a hallmark of cognition. It remains unclear, however, whether the predictive behaviour observed in controlled lab environments generalises to rich, everyday settings. In four virtual reality experiments, we tested whether a well-established marker of linguistic prediction (anticipatory eye movements) replicated when increasing the naturalness of the paradigm by means of immersing participants in naturalistic scenes (Experiment 1), increasing the number of distractor objects (Experiment 2), modifying the proportion of predictable noun-referents (Experiment 3), and manipulating the location of referents relative to the joint attentional space (Experiment 4). Robust anticipatory eye movements were observed for Experiments 1–3. The anticipatory effect disappeared, however, in Experiment 4. Our findings suggest that predictive processing occurs in everyday communication if the referents are situated in the joint attentional space. Methodologically, our study confirms that ecological validity and experimental control may go hand-in-hand in the study of human predictive behaviour.
  • Hill, C. (2011). Named and unnamed spaces: Color, kin and the environment in Umpila. The Senses & Society, 6(1), 57-67. doi:10.2752/174589311X12893982233759.

    Abstract

    Imagine describing the particular characteristics of the hue of a flower, or the quality of its scent, or the texture of its petal. Introspection suggests the expression of such sensory experiences in words is something quite different than the task of naming artifacts. The particular challenges in the linguistic encoding of sensorial experiences pose questions regarding how languages manage semantic gaps and “ineffability.” That is, what strategies do speakers have available to manage phenomena or domains of experience that are inexpressible or difficult to express in their language? This article considers this issue with regard to color in Umpila, an Aboriginal Australian language of the Paman family. The investigation of color naming and ineffability in Umpila reveals rich associations and mappings between color and visual perceptual qualities more generally, categorization of the human social world, and the environment. “Gaps” in the color system are filled or supported by associations with two of the most linguistically and culturally salient domains for Umpila - kinship and the environment.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
  • Hoeksema, N., Verga, L., Mengede, J., Van Roessel, C., Villanueva, S., Salazar-Casals, A., Rubio-Garcia, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2021). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200252. doi:10.1098/rstb.2020.0252.

    Abstract

    Comparative studies of vocal learning and vocal non-learning animals can increase our understanding of the neurobiology and evolution of vocal learning and human speech. Mammalian vocal learning is understudied: most research has either focused on vocal learning in songbirds or its absence in non-human primates. Here we focus on a highly promising model species for the neurobiology of vocal learning: grey seals. We provide a neuroanatomical atlas (based on dissected brain slices and magnetic resonance images), a labelled MRI template, a 3D model with volumetric measurements of brain regions, and histological cortical stainings. Four main features of the grey seal brain stand out. (1) It is relatively big and highly convoluted. (2) It hosts a relatively large temporal lobe and cerebellum, structures which could support developed timing abilities and acoustic processing. (3) The cortex is similar to humans in thickness and shows the expected six-layered mammalian structure. (4) Expression of FoxP2 - a gene involved in vocal learning and spoken language - is present in deeper layers of the cortex. Our results could facilitate future studies targeting the neural and genetic underpinnings of mammalian vocal learning, thus bridging the research gap from songbirds to humans and non-human primates.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hoey, E., Hömke, P., Löfgren, E., Neumann, T., Schuerman, W. L., & Kendrick, K. H. (2021). Using expletive insertion to pursue and sanction in interaction. Journal of Sociolinguistics, 25(1), 3-25. doi:10.1111/josl.12439.

    Abstract

    This article uses conversation analysis to examine constructions like who the fuck is that—sequence‐initiating actions into which an expletive like the fuck has been inserted. We describe how this turn‐constructional practice fits into and constitutes a recurrent sequence of escalating actions. In this sequence, it is used to pursue an adequate response after an inadequate one was given, and sanction the recipient for that inadequate response. Our analysis contributes to sociolinguistic studies of swearing by offering an account of swearing as a resource for social action.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15 – 33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
    Highlights

    • Gesture rate remains constant in response to addressee feedback when the response aims to correct or clarify understanding.
    • But gesture rate decreases when speakers provide confirmatory responses to feedback signalling correct understanding.
    • Gestures are more communicative in response to addressee feedback, particularly in terms of precision, size and visual prominence.
    • Speakers make gestures in response to addressee feedback more salient by using deictic markers in speech and gaze.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion [Behavioral coordination, mimicry, and co-speech gesture in interaction]. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Holler, J., Alday, P. M., Decuyper, C., Geiger, M., Kendrick, K. H., & Meyer, A. S. (2021). Competition reduces response times in multiparty conversation. Frontiers in Psychology, 12: 693124. doi:10.3389/fpsyg.2021.693124.

    Abstract

    Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
  • Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.

    Abstract

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.
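    As a rough illustration of the kind of lexical similarity measure the abstract describes, the sketch below computes length-normalized Levenshtein (edit) distances over a small, invented word list. It is not the ASJP consortium's code; the word pairs and the simple averaging are assumptions made only for illustration.

        # Length-normalized Levenshtein (edit) distance between word forms, averaged
        # over a hypothetical word list for two languages. Not the ASJP implementation.
        def levenshtein(a: str, b: str) -> int:
            """Classic dynamic-programming edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    curr.append(min(prev[j] + 1,                # deletion
                                    curr[j - 1] + 1,            # insertion
                                    prev[j - 1] + (ca != cb)))  # substitution
                prev = curr
            return prev[-1]

        def normalized_distance(a: str, b: str) -> float:
            """Edit distance divided by the length of the longer word (0 = identical)."""
            return levenshtein(a, b) / max(len(a), len(b))

        # Invented word pairs for the same concepts in two related languages.
        pairs = [("hand", "hant"), ("naso", "nasu"), ("akwa", "agva")]
        mean_distance = sum(normalized_distance(a, b) for a, b in pairs) / len(pairs)
        print(round(mean_distance, 3))  # smaller values = lexically more similar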

  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample that is more than ten times that used for prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
  • Hoogman, M., Aarts, E., Zwiers, M., Slaats-Willemse, D., Naber, M., Onnink, M., Cools, R., Kan, C., Buitelaar, J., & Franke, B. (2011). Nitric Oxide Synthase genotype modulation of impulsivity and ventral striatal activity in adult ADHD patients and healthy comparison subjects. American Journal of Psychiatry, 168, 1099-1106. doi:10.1176/appi.ajp.2011.10101446.

    Abstract

    Objective: Attention deficit hyperactivity disorder (ADHD) is a highly heritable disorder. The NOS1 gene encoding nitric oxide synthase is a candidate gene for ADHD and has been previously linked with impulsivity. In the present study, the authors investigated the effect of a functional variable number of tandem repeats (VNTR) polymorphism in NOS1 (NOS1 exon 1f-VNTR) on the processing of rewards, one of the cognitive deficits in ADHD. Method: A sample of 136 participants, consisting of 87 adult ADHD patients and 49 healthy comparison subjects, completed a reward-related impulsivity task. A total of 104 participants also underwent functional magnetic resonance imaging during a reward anticipation task. The effect of the NOS1 exon 1f-VNTR genotype on reward-related impulsivity and reward-related ventral striatal activity was examined. Results: ADHD patients had higher impulsivity scores and lower ventral striatal activity than healthy comparison subjects. The association between the short allele and increased impulsivity was confirmed. However, independent of disease status, homozygous carriers of the short allele of NOS1, the ADHD risk genotype, demonstrated higher ventral striatal activity than carriers of the other NOS1 VNTR genotypes. Conclusions: The authors suggest that the NOS1 genotype influences impulsivity and its relation with ADHD is mediated through effects on this behavioral trait. Increased ventral striatal activity related to NOS1 may be compensatory for effects in other brain regions.
  • Horan Skilton, A., & Peeters, D. (2021). Cross-linguistic differences in demonstrative systems: Comparing spatial and non-spatial influences on demonstrative use in Ticuna and Dutch. Journal of Pragmatics, 180, 248-265. doi:10.1016/j.pragma.2021.05.001.

    Abstract

    In all spoken languages, speakers use demonstratives – words like this and that – to refer to entities in their immediate environment. But which factors determine whether they use one demonstrative (this) or another (that)? Here we report the results of an experiment examining the effects of referent visibility, referent distance, and addressee location on the production of demonstratives by speakers of Ticuna (isolate; Brazil, Colombia, Peru), an Amazonian language with four demonstratives, and speakers of Dutch (Indo-European; Netherlands, Belgium), which has two demonstratives. We found that Ticuna speakers’ use of demonstratives displayed effects of addressee location and referent distance, but not referent visibility. By contrast, under comparable conditions, Dutch speakers displayed sensitivity only to referent distance. Interestingly, we also observed that Ticuna speakers consistently used demonstratives in all referential utterances in our experimental paradigm, while Dutch speakers strongly preferred to use definite articles. Taken together, these findings shed light on the significant diversity found in demonstrative systems across languages. Additionally, they invite researchers studying exophoric demonstratives to broaden their horizons by cross-linguistically investigating the factors involved in speakers’ choice of demonstratives over other types of referring expressions, especially articles.
  • Hörpel, S. G., Baier, L., Peremans, H., Reijniers, J., Wiegrebe, L., & Firzlaff, U. (2021). Communication breakdown: Limits of spectro-temporal resolution for the perception of bat communication calls. Scientific Reports, 11: 13708. doi:10.1038/s41598-021-92842-4.

    Abstract

    During vocal communication, the spectro-temporal structure of vocalizations conveys important contextual information. Bats excel in the use of sounds for echolocation by meticulous encoding of signals in the temporal domain. We therefore hypothesized that for social communication as well, bats would excel at detecting minute distortions in the spectro-temporal structure of calls. To test this hypothesis, we systematically introduced spectro-temporal distortion to communication calls of Phyllostomus discolor bats. We broke down each call into windows of the same length and randomized the phase spectrum inside each window. The overall degree of spectro-temporal distortion in communication calls increased with window length. Modelling the bat auditory periphery revealed that cochlear mechanisms allow discrimination of fast spectro-temporal envelopes. We evaluated model predictions with experimental psychophysical and neurophysiological data. We first assessed bats’ performance in discriminating original versions of calls from increasingly distorted versions of the same calls. We further examined cortical responses to determine additional specializations for call discrimination at the cortical level. Psychophysical and cortical responses concurred with model predictions, revealing discrimination thresholds in the range of 8–15 ms randomization-window length. Our data suggest that specialized cortical areas are not necessary to impart psychophysical resilience to temporal distortion in communication calls.

    Additional information

    supplementary information
  • Hoymann, G. (2014). [Review of the book Bridging the language gap, Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African languages and linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Hribar, A., Haun, D. B. M., & Call, J. (2011). Great apes’ strategies to map spatial relations. Animal Cognition, 14, 511-523. doi:10.1007/s10071-011-0385-6.

    Abstract

    We investigated reasoning about spatial relational similarity in three great ape species: chimpanzees, bonobos, and orangutans. Apes were presented with three spatial mapping tasks in which they were required to find a reward in an array of three cups, after observing a reward being hidden in a different array of three cups. To obtain a food reward, apes needed to choose the cup that was in the same relative position (i.e., on the left) as the baited cup in the other array. The three tasks differed in the constellation of the two arrays. In Experiment 1, the arrays were placed next to each other, forming a line. In Experiment 2, the positioning of the two arrays varied each trial, being placed either one behind the other in two rows, or next to each other, forming a line. Finally, in Experiment 3, the two arrays were always positioned one behind the other in two rows, but misaligned. Results suggested that apes compared the two arrays and recognized that they were similar in some way. However, we believe that instead of mapping the left–left, middle–middle, and right–right cups from each array, they mapped the cups that shared the most similar relations to nearby landmarks (table’s visual boundaries).
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.

    Abstract

    We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huettig, F., & Altmann, G. (2011). Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention. Quarterly Journal of Experimental Psychology, 64(1), 122-145. doi:10.1080/17470218.2010.481474.

    Abstract

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
  • Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137, 138-150. doi:10.1016/j.actpsy.2010.07.013.

    Abstract

    In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
  • Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Psychology, 2: e285. doi:10.3389/fpsyg.2011.00285.

    Abstract

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2) but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts.
  • Huisman, J. L. A., van Hout, R., & Majid, A. (2021). Patterns of semantic variation differ across body parts: evidence from the Japonic languages. Cognitive Linguistics, 32, 455-486. doi:10.1515/cog-2020-0079.

    Abstract

    The human body is central to myriad metaphors, so studying the conceptualisation of the body itself is critical if we are to understand its broader use. One essential but understudied issue is whether languages differ in which body parts they single out for naming. This paper takes a multi-method approach to investigate body part nomenclature within a single language family. Using both a naming task (Study 1) and colouring-in task (Study 2) to collect data from six Japonic languages, we found that lexical similarity for body part terminology was notably differentiated within Japonic, and similar variation was evident in semantics too. Novel application of cluster analysis on naming data revealed a relatively flat hierarchical structure for parts of the face, whereas parts of the body were organised with deeper hierarchical structure. The colouring data revealed that bounded parts show more stability across languages than unbounded parts. Overall, the data reveal there is not a single universal conceptualisation of the body as is often assumed, and that in-depth, multi-method explorations of under-studied languages are urgently required.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2021). Changes in theta and alpha oscillatory signatures of attentional control in older and middle age. European Journal of Neuroscience, 54(1), 4314-4337. doi:10.1111/ejn.15259.

    Abstract

    Recent behavioural research has reported age-related changes in the costs of refocusing attention from a temporal (rapid serial visual presentation) to a spatial (visual search) task. Using magnetoencephalography, we have now compared the neural signatures of attention refocusing between three age groups (19–30, 40–49 and 60+ years) and found differences in task-related modulation and cortical localisation of alpha and theta oscillations. Efficient, faster refocusing in the youngest group compared to both middle age and older groups was reflected in parietal theta effects that were significantly reduced in the older groups. Residual parietal theta activity in older individuals was beneficial to attentional refocusing and could reflect preserved attention mechanisms. Slowed refocusing of attention, especially when a target required consolidation, in the older and middle-aged adults was accompanied by a posterior theta deficit and increased recruitment of frontal (middle-aged and older groups) and temporal (older group only) areas, demonstrating a posterior to anterior processing shift. Theta but not alpha modulation correlated with task performance, suggesting that older adults' stronger and more widely distributed alpha power modulation could reflect decreased neural precision or dedifferentiation but requires further investigation. Our results demonstrate that older adults present with different alpha and theta oscillatory signatures during attentional control, reflecting cognitive decline and, potentially, also different cognitive strategies in an attempt to compensate for decline.

    Additional information

    supplementary material
  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Humphries, S., Holler*, J., Crawford, T., & Poliakoff*, E. (2021). Cospeech gestures are a window into the effects of Parkinson’s disease on action representations. Journal of Experimental Psychology: General, 150(8), 1581-1597. doi:10.1037/xge0001002.

    Abstract

    (* indicates joint senior authors.) Parkinson’s disease impairs motor function and cognition, which together affect language and communication. Co-speech gestures are a form of language-related actions that provide imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement of intact neural sensory and motor systems. We tested this hypothesis with a fine-grained analysis of co-speech action gestures in Parkinson’s disease. 37 people with Parkinson’s disease and 33 controls described two scenes featuring actions which varied in their inherent degree of bodily motion. In addition to the perspective of action gestures (gestural viewpoint/first- vs. third-person perspective), we analysed how Parkinson’s patients represent manner (how something/someone moves) and path information (where something/someone moves to) in gesture, depending on the degree of bodily motion involved in the action depicted. We replicated an earlier finding that people with Parkinson’s disease are less likely to gesture about actions from a first-person perspective – preferring instead to depict actions gesturally from a third-person perspective – and show that this effect is modulated by the degree of bodily motion in the actions being depicted. When describing high motion actions, the Parkinson’s group were specifically impaired in depicting manner information in gesture and their use of third-person path-only gestures was significantly increased. Gestures about low motion actions were relatively spared. These results inform our understanding of the neural and cognitive basis of gesture production by providing neuropsychological evidence that action gesture production relies on intact motor network function.

    Additional information

    Open data and code
  • Hustá, C., Zheng, X., Papoutsi, C., & Piai, V. (2021). Electrophysiological signatures of conceptual and lexical retrieval from semantic memory. Neuropsychologia, 161: 107988. doi:10.1016/j.neuropsychologia.2021.107988.

    Abstract

    Retrieval from semantic memory of conceptual and lexical information is essential for producing speech. It is unclear whether there are differences in the neural mechanisms of conceptual and lexical retrieval when spreading activation through semantic memory is initiated by verbal or nonverbal settings. The same twenty participants took part in two EEG experiments. The first experiment examined conceptual and lexical retrieval following nonverbal settings, whereas the second experiment was a replication of previous studies examining conceptual and lexical retrieval following verbal settings. Target pictures were presented after constraining and nonconstraining contexts. In the nonverbal settings, contexts were provided as two priming pictures (e.g., constraining: nest, feather; nonconstraining: anchor, lipstick; target picture: BIRD). In the verbal settings, contexts were provided as sentences (e.g., constraining: “The farmer milked a...”; nonconstraining: “The child drew a...”; target picture: COW). Target pictures were named faster following constraining contexts in both experiments, indicating that conceptual preparation starts before target picture onset in constraining conditions. In the verbal experiment, we replicated the alpha-beta power decreases in constraining relative to nonconstraining conditions before target picture onset. No such power decreases were found in the nonverbal experiment. Power decreases in constraining relative to nonconstraining conditions were significantly different between experiments. Our findings suggest that participants engage in conceptual preparation following verbal and nonverbal settings, albeit differently. The retrieval of a target word, initiated by verbal settings, is associated with alpha-beta power decreases. By contrast, broad conceptual preparation alone, prompted by nonverbal settings, does not seem enough to elicit alpha-beta power decreases. These findings have implications for theories of oscillations and semantic memory.

    Additional information

    1-s2.0-S0028393221002414-mmc1.pdf
  • Ille, S., Ohlerth, A.-K., Colle, D., Colle, H., Dragoy, O., Goodden, J., Robe, P., Rofes, A., Mandonnet, E., Robert, E., Satoer, D., Viegas, C., Visch-Brink, E., van Zandvoort, M., & Krieg, S. (2021). Augmented reality for the virtual dissection of white matter pathways. Acta Neurochirurgica, (4), 895-903. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background: The human white matter pathway network is complex and of critical importance for functionality. Thus, learning and understanding white matter tract anatomy is important for the training of neuroscientists and neurosurgeons. The study aims to test and evaluate a new method for fiber dissection using augmented reality (AR) in a group which is experienced in cadaver white matter dissection courses and in vivo tractography. Methods: Fifteen neurosurgeons, neurolinguists, and neuroscientists participated in this questionnaire-based study. We presented five cases of patients with left-sided perisylvian gliomas who underwent awake craniotomy. Diffusion tensor imaging fiber tracking (DTI FT) was performed and the language-related networks were visualized separated in different tracts by color. Participants were able to virtually dissect the prepared DTI FTs using a spatial computer and AR goggles. The application was evaluated through a questionnaire with answers from 0 (minimum) to 10 (maximum). Results: Participants rated the overall experience of AR fiber dissection with a median of 8 points (mean ± standard deviation 8.5 ± 1.4). Usefulness for fiber dissection courses and education in general was rated with 8 (8.3 ± 1.4) and 8 (8.1 ± 1.5) points, respectively. Educational value was expected to be high for several target audiences (student: median 9, 8.6 ± 1.4; resident: 9, 8.5 ± 1.8; surgeon: 9, 8.2 ± 2.4; scientist: 8.5, 8.0 ± 2.4). Even clinical application of AR fiber dissection was expected to be of value with a median of 7 points (7.0 ± 2.5).
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain regions are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2: 255. doi:10.3389/fpsyg.2011.00255.

    Abstract

    In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed, requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear, role of the inferior parietal cortex in word production.
  • Indefrey, P. (2014). Time course of word production does not support a parallel input architecture. Language, Cognition and Neuroscience, 29(1), 33-34. doi:10.1080/01690965.2013.847191.

    Abstract

    Hickok's enterprise to unify psycholinguistic and motor control models is highly stimulating. Nonetheless, there are problems with the model with respect to the time course of neural activation in word production, the flexibility for continuous speech, and the need for non-motor feedback.

  • Ingason, A., Rujescu, D., Cichon, S., Sigurdsson, E., Sigmundsson, T., Pietilainen, O. P. H., Buizer-Voskamp, J. E., Strengman, E., Francks, C., Muglia, P., Gylfason, A., Gustafsson, O., Olason, P. I., Steinberg, S., Hansen, T., Jakobsen, K. D., Rasmussen, H. B., Giegling, I., Möller, H.-J., Hartmann, A., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Bramon, E., Kiemeney, L. A., Franke, B., Murray, R., Vassos, E., Toulopoulou, T., Mühleisen, T. W., Tosato, S., Ruggeri, M., Djurovic, S., Andreassen, O. A., Zhang, Z., Werge, T., Ophoff, R. A., Rietschel, M., Nöthen, M. M., Petursson, H., Stefansson, H., Peltonen, L., Collier, D., Stefansson, K., & St Clair, D. M. (2011). Copy number variations of chromosome 16p13.1 region associated with schizophrenia. Molecular Psychiatry, 16, 17-25. doi:10.1038/mp.2009.101.

    Abstract

    Deletions and reciprocal duplications of the chromosome 16p13.1 region have recently been reported in several cases of autism and mental retardation (MR). As genomic copy number variants found in these two disorders may also associate with schizophrenia, we examined 4345 schizophrenia patients and 35 079 controls from 8 European populations for duplications and deletions at the 16p13.1 locus, using microarray data. We found a threefold excess of duplications and deletions in schizophrenia cases compared with controls, with duplications present in 0.30% of cases versus 0.09% of controls (P=0.007) and deletions in 0.12 % of cases and 0.04% of controls (P>0.05). The region can be divided into three intervals defined by flanking low copy repeats. Duplications spanning intervals I and II showed the most significant (P=0.00010) association with schizophrenia. The age of onset in duplication and deletion carriers among cases ranged from 12 to 35 years, and the majority were males with a family history of psychiatric disorders. In a single Icelandic family, a duplication spanning intervals I and II was present in two cases of schizophrenia, and individual cases of alcoholism, attention deficit hyperactivity disorder and dyslexia. Candidate genes in the region include NTAN1 and NDE1. We conclude that duplications and perhaps also deletions of chromosome 16p13.1, previously reported to be associated with autism and MR, also confer risk of schizophrenia.
  • Yu, X., Janse, E., & Schoonen, R. (2021). The effect of learning context on L2 listening development: Knowledge and processing. Studies in Second Language Acquisition, 43(2), 329-354. doi:10.1017/S0272263120000534.

    Abstract

    Little research has been done on the effect of learning context on L2 listening development. Motivated by DeKeyser’s (2015) skill acquisition theory of second language acquisition, this study compares L2 listening development in study abroad (SA) and at home (AH) contexts from both language knowledge and processing perspectives. One hundred forty-nine Chinese postgraduates studying in either China or the United Kingdom participated in a battery of listening tasks at the beginning and at the end of an academic year. These tasks measure auditory vocabulary knowledge and listening processing efficiency (i.e., accuracy, speed, and stability of processing) in word recognition, grammatical processing, and semantic analysis. Results show that, provided equal starting levels, the SA learners made more progress than the AH learners in speed of processing across the language processing tasks, with less clear results for vocabulary acquisition. Studying abroad may be an effective intervention for L2 learning, especially in terms of processing speed.
  • Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
  • Janse, E., & Andringa, S. J. (2021). The roles of cognitive abilities and hearing acuity in older adults’ recognition of words taken from fast and spectrally reduced speech. Applied Psycholinguistics, 42(3), 763-790. doi:10.1017/S0142716421000047.

    Abstract

    Previous literature has identified several cognitive abilities as predictors of individual differences in speech perception. Working memory was chief among them, but effects have also been found for processing speed. Most research has been conducted on speech in noise, but fast and unclear articulation also makes listening challenging, particularly for older listeners. As a first step toward specifying the cognitive mechanisms underlying spoken word recognition, we set up this study to determine which factors explain unique variation in word identification accuracy in fast speech, and the extent to which this was affected by further degradation of the speech signal. To that end, 105 older adults were tested on identification accuracy of fast words in unaltered and degraded conditions in which the speech stimuli were low-pass filtered. They were also tested on processing speed, memory, vocabulary knowledge, and hearing sensitivity. A structural equation analysis showed that only memory and hearing sensitivity explained unique variance in word recognition in both listening conditions. Working memory was more strongly associated with performance in the unfiltered than in the filtered condition. These results suggest that memory skills, rather than speed, facilitate the mapping of single words onto stored lexical representations, particularly in conditions of medium difficulty.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate, however, older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  • Jansen, N. A., Braden, R. O., Srivastava, S., Otness, E. F., Lesca, G., Rossi, M., Nizon, M., Bernier, R. A., Quelin, C., Van Haeringen, A., Kleefstra, T., Wong, M. M. K., Whalen, S., Fisher, S. E., Morgan, A. T., & Van Bon, B. W. (2021). Clinical delineation of SETBP1 haploinsufficiency disorder. European Journal of Human Genetics, 29, 1198-1205. doi:10.1038/s41431-021-00888-9.

    Abstract

    SETBP1 haploinsufficiency disorder (MIM#616078) is caused by haploinsufficiency of SETBP1 on chromosome 18q12.3, but there has not yet been any systematic evaluation of the major features of this monogenic syndrome, assessing penetrance and expressivity. We describe the first comprehensive study to delineate the associated clinical phenotype, with findings from 34 individuals, including 24 novel cases, all of whom have a SETBP1 loss-of-function variant or single (coding) gene deletion, confirmed by molecular diagnostics. The most commonly reported clinical features included mild motor developmental delay, speech impairment, intellectual disability, hypotonia, vision impairment, attention/concentration deficits, and hyperactivity. Although there is a mild overlap in certain facial features, the disorder does not lead to a distinctive recognizable facial gestalt. As well as providing insight into the clinical spectrum of SETBP1 haploinsufficiency disorder, this report puts forward care recommendations for patient management.

    Additional information

    supplementary table
  • Janssen, J., Díaz-Caneja, C. M., Alloza, C., Schippers, A., De Hoyos, L., Santonja, J., Gordaliza, P. M., Buimer, E. E. L., van Haren, N. E. M., Cahn, W., Arango, C., Kahn, R. S., Hulshoff Pol, H. E., & Schnack, H. G. (2021). Dissimilarity in sulcal width patterns in the cortex can be used to identify patients with schizophrenia with extreme deficits in cognitive performance. Schizophrenia Bulletin, 47(2), 552-561. doi:10.1093/schbul/sbaa131.

    Abstract

    Schizophrenia is a biologically complex disorder with multiple regional deficits in cortical brain morphology. In addition, interindividual heterogeneity of cortical morphological metrics is larger in patients with schizophrenia when compared to healthy controls. Exploiting interindividual differences in the severity of cortical morphological deficits in patients instead of focusing on group averages may aid in detecting biologically informed homogeneous subgroups. The person-based similarity index (PBSI) of brain morphology indexes an individual’s morphometric similarity across numerous cortical regions amongst a sample of healthy subjects. We extended the PBSI such that it indexes the morphometric similarity of an independent individual (eg, a patient) with respect to healthy control subjects. By employing a normative modeling approach on longitudinal data, we determined an individual’s degree of morphometric dissimilarity to the norm. We calculated the PBSI for sulcal width (PBSI-SW) in patients with schizophrenia and healthy control subjects (164 patients and 164 healthy controls; 656 magnetic resonance imaging scans) and associated it with cognitive performance and cortical sulcation index. A subgroup of patients with markedly deviant PBSI-SW showed extreme deficits in cognitive performance and cortical sulcation. Progressive reduction of PBSI-SW in the schizophrenia group relative to healthy controls was driven by these deviating individuals. By explicitly leveraging interindividual differences in the severity of PBSI-SW deficits, neuroimaging-driven subgrouping of patients is feasible. As such, our results pave the way for future applications of morphometric similarity indices for subtyping of clinical populations.

  • Jara-Ettinger, J., & Rubio-Fernández, P. (2021). Quantitative mental state attributions in language understanding. Science Advances, 7: eabj0970. doi:10.1126/sciadv.abj0970.

    Abstract

    Human social intelligence relies on our ability to infer other people’s mental states such as their beliefs, desires, and intentions. While people are proficient at mental state inference from physical action, it is unknown whether people can make inferences of comparable granularity from simple linguistic events. Here, we show that people can make quantitative mental state attributions from simple referential expressions, replicating the fine-grained inferential structure characteristic of nonlinguistic theory of mind. Moreover, people quantitatively adjust these inferences after brief exposures to speaker-specific speech patterns. These judgments matched the predictions made by our computational model of theory of mind in language, but could not be explained by a simpler qualitative model that attributes mental states deductively. Our findings show how the connection between language and theory of mind runs deep, with their interaction showing in one of the most fundamental forms of human communication: reference.

    Additional information

    https://osf.io/h8qfy/
  • Jeltema, H., Ohlerth, A.-K., de Wit, A., Wagemakers, M., Rofes, A., Bastiaanse, R., & Drost, G. (2021). Comparing navigated transcranial magnetic stimulation mapping and "gold standard" direct cortical stimulation mapping in neurosurgery: a systematic review. Neurosurgical Review, (4), 1903-1920. doi:10.1007/s10143-020-01397-x.

    Abstract

    The objective of this systematic review is to create an overview of the literature on the comparison of navigated transcranial magnetic stimulation (nTMS) as a mapping tool to the current gold standard, which is (intraoperative) direct cortical stimulation (DCS) mapping. A search in the databases of PubMed, EMBASE, and Web of Science was performed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and recommendations were used. Thirty-five publications were included in the review, describing a total of 552 patients. All studies concerned either mapping of motor or language function. No comparative data for nTMS and DCS for other neurological functions were found. For motor mapping, the distances between the cortical representation of the different muscle groups identified by nTMS and DCS varied between 2 and 16 mm. Regarding mapping of language function, solely an object naming task was performed in the comparative studies on nTMS and DCS. Sensitivity and specificity ranged from 10 to 100% and 13.3–98%, respectively, when nTMS language mapping was compared with DCS mapping. The positive predictive value (PPV) and negative predictive value (NPV) ranged from 17 to 75% and 57–100% respectively. The available evidence for nTMS as a mapping modality for motor and language function is discussed.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & McQueen, J. M. (2014). Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 793-808. doi:10.1080/17470218.2013.834371.

    Abstract

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
  • Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 18, 943-950. doi:10.3758/s13423-011-0129-2.

    Abstract

    Listeners use lexical knowledge to adjust to speakers’ idiosyncratic pronunciations. Dutch listeners learn to interpret an ambiguous sound between /s/ and /f/ as /f/ if they hear it word-finally in Dutch words normally ending in /f/, but as /s/ if they hear it in normally /s/-final words. Here, we examined two positional effects in lexically guided retuning. In Experiment 1, ambiguous sounds during exposure always appeared in word-initial position (replacing the first sounds of /f/- or /s/-initial words). No retuning was found. In Experiment 2, the same ambiguous sounds always appeared word-finally during exposure. Here, retuning was found. Lexically guided perceptual learning thus appears to emerge reliably only when lexical knowledge is available as the to-be-tuned segment is initially being processed. Under these conditions, however, lexically guided retuning was position independent: It generalized across syllabic positions. Lexical retuning can thus benefit future recognition of particular sounds wherever they appear in words.
  • Johnson, E., McQueen, J. M., & Huettig, F. (2011). Toddlers’ language-mediated visual search: They need not have the words for it. The Quarterly Journal of Experimental Psychology, 64, 1672-1682. doi:10.1080/17470218.2011.594165.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Johnson, E. K., & Huettig, F. (2011). Eye movements during language-mediated visual search reveal a strong link between overt visual attention and lexical processing in 36-month-olds. Psychological Research, 75, 35-42. doi:10.1007/s00426-010-0285-4.

    Abstract

    The nature of children’s early lexical processing was investigated by asking what information 36-month-olds access and use when instructed to find a known but absent referent. Children readily retrieved stored knowledge about characteristic color, i.e. when asked to find an object with a typical color (e.g. strawberry), children tended to fixate more upon an object that had the same (e.g. red plane) as opposed to a different (e.g. yellow plane) color. They did so regardless of the fact that they have had plenty of time to recognize the pictures for what they are, i.e. planes not strawberries. These data represent the first demonstration that language-mediated shifts of overt attention in young children can be driven by individual stored visual attributes of known words that mismatch on most other dimensions. The finding suggests that lexical processing and overt attention are strongly linked from an early age.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Johnson, J. S., Sutterer, D. W., Acheson, D. J., Lewis-Peacock, J. A., & Postle, B. R. (2011). Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory. Frontiers in Psychology, 2(128), 1-9. doi:10.3389/fpsyg.2011.00128.

    Abstract

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (∼8–14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power (DPABP) reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task relevance of shape information was systematically manipulated across trial blocks and electroencephalography was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal DPABP in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape–location associations in short-term memory.
  • Johnson, E. K., Westrek, E., Nazzi, T., & Cutler, A. (2011). Infant ability to tell voices apart rests on language experience. Developmental Science, 14(5), 1002-1011. doi:10.1111/j.1467-7687.2011.01052.x.

    Abstract

    A visual fixation study tested whether seven-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for habituation, or in another language. When the language at test mismatched that in habituation, infants always noticed the change. When language remained constant and only talker altered, however, infants detected the change only if the language was the native tongue. Adult listeners with a different native tongue than the infants did not reproduce the discriminability patterns shown by the infants, and infants detected neither voice nor language changes in reversed speech; both these results argue against explanation of the native-language voice discrimination in terms of acoustic properties of the stimuli. The ability to identify talkers is, like many other perceptual abilities, strongly influenced by early life experience.
  • Jones, C. R., Pickles, A., Falcaro, M., Marsden, A. J., Happé, F., Scott, S. K., Sauter, D., Tregay, J., Phillips, R. J., Baird, G., Simonoff, E., & Charman, T. (2011). A multimodal approach to emotion recognition ability in autism spectrum disorders. Journal of Child Psychology and Psychiatry, 52(3), 275-285. doi:10.1111/j.1469-7610.2010.02328.x.

    Abstract

    Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal; hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. Methods: We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (>= 80 vs. < 80). Results: There was no significant difference between groups for the majority of emotions and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. Conclusions: The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD.
  • Jones, G., Cabiddu, F., Andrews, M., & Rowland, C. F. (2021). Chunks of phonological knowledge play a significant role in children’s word learning and explain effects of neighborhood size, phonotactic probability, word frequency and word length. Journal of Memory and Language, 119: 104232. doi:10.1016/j.jml.2021.104232.

    Abstract

    A key omission from many accounts of children’s early word learning is the linguistic knowledge that the child has acquired up to the point when learning occurs. We simulate this knowledge using a computational model that learns phoneme and word sequence knowledge from naturalistic language corpora. We show how this simple model is able to account for effects of word length, word frequency, neighborhood density and phonotactic probability on children’s early word learning. Moreover, we show how effects of neighborhood density and phonotactic probability on word learning are largely influenced by word length, with our model being able to capture all effects. We then use predictions from the model to show how the ease by which a child learns a new word from maternal input is directly influenced by the phonological knowledge that the child has acquired from other words up to the point of encountering the new word. There are major implications of this work: models and theories of early word learning need to incorporate existing sublexical and lexical knowledge in explaining developmental change while well-established indices of word learning are rejected in favor of phonological knowledge of varying grain sizes.

    Additional information

    supplementary data; research data
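
    The chunk-based account summarized in the Jones, Cabiddu, Andrews and Rowland (2021) entry above lends itself to a small illustration. The Python sketch below builds an inventory of phoneme chunks from words a learner already knows and scores a new word by how few known chunks are needed to cover it, so that words decomposable into large familiar chunks come out as easier. This is a toy under stated assumptions, not the authors' model; the chunk length limit, the greedy covering rule and the example words are all illustrative.

    # Toy sketch of chunk-based phonological knowledge (illustrative only;
    # not the model of Jones, Cabiddu, Andrews & Rowland, 2021).
    def chunk_inventory(known_words, max_len=3):
        """Collect phoneme substrings (chunks) up to max_len from known words."""
        chunks = set()
        for word in known_words:                     # each word: tuple of phonemes
            for i in range(len(word)):
                for j in range(i + 1, min(i + max_len, len(word)) + 1):
                    chunks.add(word[i:j])
        return chunks

    def chunks_needed(word, chunks):
        """Greedily cover the word with the longest known chunks; fewer = easier."""
        count, i = 0, 0
        while i < len(word):
            for j in range(len(word), i, -1):        # try longest match first
                if word[i:j] in chunks:
                    i, count = j, count + 1
                    break
            else:                                    # unknown phoneme forms its own chunk
                i, count = i + 1, count + 1
        return count

    known = [tuple("kat"), tuple("bat"), tuple("kap")]   # hypothetical mini-vocabulary
    inventory = chunk_inventory(known)
    print(chunks_needed(tuple("kab"), inventory))        # 'ka' + 'b' -> 2 chunks
    print(chunks_needed(tuple("zuz"), inventory))        # all unfamiliar -> 3 chunks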
  • Jongman, S. R., Khoe, Y. H., & Hintz, F. (2021). Vocabulary size influences spontaneous speech in native language users: Validating the use of automatic speech recognition in individual differences research. Language and Speech, 64(1), 35-51. doi:10.1177/0023830920911079.

    Abstract

    Previous research has shown that vocabulary size affects performance on laboratory word production tasks. Individuals who know many words show faster lexical access and retrieve more words belonging to pre-specified categories than individuals who know fewer words. The present study examined the relationship between receptive vocabulary size and speaking skills as assessed in a natural sentence production task. We asked whether measures derived from spontaneous responses to every-day questions correlate with the size of participants’ vocabulary. Moreover, we assessed the suitability of automatic speech recognition for the analysis of participants’ responses in complex language production data. We found that vocabulary size predicted indices of spontaneous speech: Individuals with a larger vocabulary produced more words and had a higher speech-silence ratio compared to individuals with a smaller vocabulary. Importantly, these relationships were reliably identified using manual and automated transcription methods. Taken together, our results suggest that spontaneous speech elicitation is a useful method to investigate natural language production and that automatic speech recognition can alleviate the burden of labor-intensive speech transcription.
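
    The indices mentioned in the Jongman, Khoe and Hintz (2021) entry above (number of words produced, speech-silence ratio) can be derived from any transcription that carries word-level timestamps, whether produced manually or by an automatic speech recognizer. Below is a minimal Python sketch of that computation plus a plain Pearson correlation with vocabulary scores; the pause handling, variable names and toy numbers are assumptions, not the paper's pipeline.

    # Minimal sketch (not the paper's pipeline): spontaneous-speech indices from
    # timestamped words, correlated with vocabulary scores.
    from math import sqrt

    def speech_indices(words):
        """words: list of (token, onset_s, offset_s) -> (word count, speech-silence ratio)."""
        speech = sum(off - on for _, on, off in words)
        total = words[-1][2] - words[0][1]
        silence = max(total - speech, 1e-9)          # avoid division by zero
        return len(words), speech / silence

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    response = [("well", 0.00, 0.35), ("I", 0.50, 0.60), ("think", 0.60, 0.95)]
    print(speech_indices(response))                  # (3, ~5.3)

    vocab_scores = [48, 55, 62, 70, 81]              # hypothetical per-speaker data
    words_produced = [95, 110, 120, 150, 170]
    print(round(pearson(vocab_scores, words_produced), 2))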
  • Jordan, F. (2011). A phylogenetic analysis of the evolution of Austronesian sibling terminologies. Human Biology, 83, 297-321. doi:10.3378/027.083.0209.

    Abstract

    Social structure in human societies is underpinned by the variable expression of ideas about relatedness between different types of kin. We express these ideas through language in our kin terminology: to delineate who is kin and who is not, and to attach meanings to the types of kin labels associated with different individuals. Cross-culturally, there is a regular and restricted range of patterned variation in kin terminologies, and to date, our understanding of this diversity has been hampered by inadequate techniques for dealing with the hierarchical relatedness of languages (Galton’s Problem). Here I use maximum-likelihood and Bayesian phylogenetic comparative methods to begin to tease apart the processes underlying the evolution of kin terminologies in the Austronesian language family, focusing on terms for siblings. I infer (1) the probable ancestral states and (2) evolutionary models of change for the semantic distinctions of relative age (older/younger sibling) and relative sex (same sex/opposite-sex). Analyses show that early Austronesian languages contained the relative-age, but not the relative-sex distinction; the latter was reconstructed firmly only for the ancestor of Eastern Malayo-Polynesian languages. Both distinctions were best characterized by evolutionary models where the gains and losses of the semantic distinctions were equally likely. A multi-state model of change examined how the relative-sex distinction could be elaborated and found that some transitions in kin terms were not possible: jumps from absence to heavily elaborated were very unlikely, as was piece-wise dismantling of elaborate distinctions. Cultural ideas about what types of kin distinctions are important can be embedded in the semantics of language; using a phylogenetic evolutionary framework we can understand how those distinctions in meaning change through time.
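
    The evolutionary models in the Jordan (2011) entry above treat a semantic distinction as a binary trait gained and lost along the branches of the language tree, with gains and losses equally likely. For such a symmetric two-state continuous-time Markov model, the change probability over a branch of length t has a closed form, P(change) = 0.5 * (1 - exp(-2*q*t)); the Python sketch below computes it. This is a generic Mk-style illustration with a made-up rate q, not the paper's Bayesian analysis.

    # Symmetric two-state gain/loss model (generic illustration, not the
    # analysis of Jordan, 2011): equal gain and loss rate q per unit branch length.
    from math import exp

    def p_change(q, t):
        """Probability that the trait differs at the two ends of a branch of length t."""
        return 0.5 * (1.0 - exp(-2.0 * q * t))

    q = 0.1                                   # hypothetical rate
    for t in (1, 5, 20):
        print(t, round(p_change(q, t), 3))    # approaches 0.5 on long branches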
  • Junge, C., & Cutler, A. (2014). Early word recognition and later language skills. Brain sciences, 4(4), 532-559. doi:10.3390/brainsci4040532.

    Abstract

    Recent behavioral and electrophysiological evidence has highlighted the long-term importance for language skills of an early ability to recognize words in continuous speech. We here present further tests of this long-term link in the form of follow-up studies conducted with two (separate) groups of infants who had earlier participated in speech segmentation tasks. Each study extends prior follow-up tests: Study 1 by using a novel follow-up measure that taps into online processing, Study 2 by assessing language performance relationships over a longer time span than previously tested. Results of Study 1 show that brain correlates of speech segmentation ability at 10 months are positively related to 16-month-olds’ target fixations in a looking-while-listening task. Results of Study 2 show that infant speech segmentation ability no longer directly predicts language profiles at the age of five. However, a meta-analysis across our results and those of similar studies (Study 3) reveals that age at follow-up does not moderate effect size. Together, the results suggest that infants’ ability to recognize words in speech certainly benefits early vocabulary development; further observed relationships of later language skills to early word recognition may be consequent upon this vocabulary size effect.
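
    Study 3 in the Junge and Cutler (2014) entry above pools effect sizes across segmentation follow-up studies. A standard ingredient of such a meta-analysis is inverse-variance weighting of study-level effects; the Python sketch below shows a generic fixed-effect version with made-up numbers. It is not the paper's actual procedure, which also tests moderators such as age at follow-up.

    # Generic fixed-effect meta-analysis by inverse-variance weighting
    # (illustrative only; not the exact procedure of Junge & Cutler, 2014).
    from math import sqrt

    def pooled_effect(effects, variances):
        """Weight each study by 1/variance; return pooled estimate and its SE."""
        weights = [1.0 / v for v in variances]
        estimate = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = sqrt(1.0 / sum(weights))
        return estimate, se

    effects = [0.45, 0.30, 0.55, 0.25]        # hypothetical study-level effect sizes
    variances = [0.02, 0.03, 0.05, 0.04]      # hypothetical sampling variances
    estimate, se = pooled_effect(effects, variances)
    print(round(estimate, 3), round(se, 3))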
  • Junge, C., Cutler, A., & Hagoort, P. (2014). Successful word recognition by 10-month-olds given continuous speech both at initial exposure and test. Infancy, 19(2), 179-193. doi:10.1111/infa.12040.

    Abstract

    Most words that infants hear occur within fluent speech. To compile a vocabulary, infants therefore need to segment words from speech contexts. This study is the first to investigate whether infants (here: 10-month-olds) can recognize words when both initial exposure and test presentation are in continuous speech. Electrophysiological evidence attests that this indeed occurs: An increased extended negativity (word recognition effect) appears for familiarized target words relative to control words. This response proved constant at the individual level: Only infants who showed this negativity at test had shown such a response, within six repetitions after first occurrence, during familiarization.
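
    The word recognition effect in the Junge, Cutler and Hagoort (2014) entry above is an increased negativity for familiarized relative to control words. As a rough illustration of how such an effect can be quantified, the Python sketch below averages simulated single-channel EEG epochs per condition and compares mean amplitude in a fixed time window; the window, sampling rate and simulated data are assumptions, not the paper's recording or analysis parameters.

    # Rough illustration of an ERP 'word recognition effect' as a mean-amplitude
    # difference between conditions (simulated data; not the paper's pipeline).
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 500                                    # assumed sampling rate (Hz)
    t = np.arange(-0.2, 0.8, 1 / fs)            # epoch from -200 to 800 ms
    window = (t > 0.3) & (t < 0.6)              # assumed analysis window

    control = rng.normal(0, 2, size=(40, t.size))
    familiar = rng.normal(0, 2, size=(40, t.size)) - 1.5 * window  # extra negativity

    erp_difference = familiar.mean(axis=0) - control.mean(axis=0)
    print(round(erp_difference[window].mean(), 2))  # negative value = recognition effect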
  • Kapteijns, B., & Hintz, F. (2021). Comparing predictors of sentence self-paced reading times: Syntactic complexity versus transitional probability metrics. PLoS One, 16(7): e0254546. doi:10.1371/journal.pone.0254546.

    Abstract

    When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to reading times when entered into the same model. Our results showed that both measures explained significant portions of variance in self-paced reading times. Thus, researchers aiming to measure sentence complexity should take both SC and TP into account. All of the analyses were conducted with and without control variables known to influence reading times (word/sentence length, word frequency and word position) to showcase how the effects of SC and TP change in the presence of the control variables.

    Additional information

    supporting information
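
    The comparison described in the Kapteijns and Hintz (2021) entry above puts syntactic-complexity (SC) and transitional-probability (TP) measures into one model of self-paced reading times. The Python sketch below reproduces the bare logic with ordinary least squares on simulated data; the predictor names, the simulation and the use of plain OLS instead of the paper's exact modelling are assumptions.

    # Bare-bones sketch: regress reading times on SC and TP jointly with OLS
    # (simulated data; not the models reported in the paper).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    sc = rng.normal(0, 1, n)                  # e.g. a standardized complexity metric
    tp = rng.normal(0, 1, n)                  # e.g. standardized surprisal
    rt = 350 + 25 * sc + 15 * tp + rng.normal(0, 30, n)   # simulated reading times (ms)

    X = np.column_stack([np.ones(n), sc, tp])
    beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
    r2 = 1 - ((rt - X @ beta) ** 2).sum() / ((rt - rt.mean()) ** 2).sum()
    print(np.round(beta, 1), round(r2, 2))    # both predictors carry variance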
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.

    Abstract

    Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and if late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5) 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by the morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity.
  • Keller, K. L., Fritz, R. S., Zoubek, C. M., Kennedy, E. H., Cronin, K. A., Rothwell, E. S., & Serfass, T. L. (2014). Effects of transport on fecal glucocorticoid levels in captive-bred cotton-top tamarins (Saguinus oedipus). Journal of the Pennsylvania Academy of Science, 88(2), 84-88.

    Abstract

    The relocation of animals can induce stress when animals are placed in novel environmental conditions. The movement of captive animals among facilities is common, especially for non-human primates used in research. The stress response begins with the activation of the hypothalamic-pituitary-adrenal (HPA) axis which results in the release of glucocorticoid hormones (GC), which at chronic levels could lead to deleterious physiological effects. There is a substantial body of data concerning GC levels affecting reproduction, and rank and aggression in primates. However, the effect of transport has received much less attention. Fecal samples from eight (four male and four female) captive-bred cotton-top tamarins (Saguinus oedipus) were collected at four different time points (two pre-transport and two post-transport). The fecal samples were analyzed using an immunoassay to determine GC levels. A repeated measures analysis of variance (ANOVA) demonstrated that GC levels differed among transport times (p = 0.009), but not between sexes (p = 0.963). Five of the eight tamarins exhibited an increase in GC levels after transport. Seven of the eight tamarins exhibited a decrease in GC levels from three to six days post-transport to three weeks post-transport. Most values returned to pre-transport levels after three weeks. The results indicate that these tamarins experienced elevated GC levels following transport, but these increases were of short duration. This outcome would suggest that the negative effects of elevated GC levels were also of short duration.
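
    The Keller et al. (2014) entry above reports a repeated measures ANOVA over four transport time points measured in the same animals. The Python sketch below computes the F statistic for a one-way repeated measures design by hand on made-up numbers, to show where the test's variance components come from; the data and the omitted sex factor make this an illustration, not a reanalysis.

    # One-way repeated measures ANOVA F statistic, computed by hand on made-up
    # glucocorticoid-style values (subjects x time points); not the study's data.
    data = [                                   # rows: subjects, columns: 4 time points
        [12.0, 13.5, 20.1, 14.0],
        [10.2, 11.0, 18.4, 12.5],
        [15.1, 14.8, 22.0, 15.5],
        [ 9.8, 10.5, 16.9, 11.0],
    ]
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_cond - ss_subj

    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    print(round(F, 2), df_cond, df_error)      # compare against F(df_cond, df_error)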
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to their perceived relevance to the audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication (namely, the degree to which information may be useful to an intended recipient) influence speakers’ use of gesture.
  • Kelly, B., Wigglesworth, G., Nordlinger, R., & Blythe, J. (2014). The acquisition of polysynthetic languages. Language and Linguistics Compass, 8, 51-64. doi:10.1111/lnc3.12062.

    Abstract

    One of the major challenges in acquiring a language is being able to use morphology as an adult would, and thus, a considerable amount of acquisition research has focused on morphological production and comprehension. Most of this research, however, has focused on the acquisition of morphology in isolating languages, or languages (such as English) with limited inflectional morphology. The nature of the learning task is different, and potentially more challenging, when the child is learning a polysynthetic language – a language in which words are highly morphologically complex, expressing in a single word what in English takes a multi-word clause. To date, there has been no cross-linguistic survey of how children approach this puzzle and learn polysynthetic languages. This paper aims to provide such a survey, including a discussion of some of the general findings in the literature regarding the acquisition of polysynthetic systems.
  • Kember, H., Choi, J., Yu, J., & Cutler, A. (2021). The processing of linguistic prominence. Language and Speech, 64(2), 413-436. doi:10.1177/0023830919880217.

    Abstract

    Prominence, the expression of informational weight within utterances, can be signaled by prosodic highlighting (head-prominence, as in English) or by position (as in Korean edge-prominence). Prominence confers processing advantages, even if conveyed only by discourse manipulations. Here we compared processing of prominence in English and Korean, using a task that indexes processing success, namely recognition memory. In each language, participants’ memory was tested for target words heard in sentences in which they were prominent due to prosody, position, both or neither. Prominence produced a recall advantage, but the relative effects differed across languages. For Korean listeners the positional advantage was greater, but for English listeners prosodic and syntactic prominence had equivalent and additive effects. In a further experiment semantic and phonological foils tested depth of processing of the recall targets. Both foil types were correctly rejected, suggesting that semantic processing had not reached the level at which word form was no longer available. Together the results suggest that prominence processing is primarily driven by universal effects of information structure; but language-specific differences in frequency of experience prompt different relative advantages of prominence signal types. Processing efficiency increases in each case, however, creating more accurate and more rapidly contactable memory representations.
  • Kemp, J. P., Sayers, A., Paternoster, L., Evans, D. M., Deere, K., St Pourcain, B., Timpson, N. J., Ring, S. M., Lorentzon, M., Lehtimäki, T., Eriksson, J., Kähönen, M., Raitakari, O., Laaksonen, M., Sievänen, H., Viikari, J., Lyytikäinen, L.-P., Smith, G. D., Fraser, W. D., Vandenput, L., Ohlsson, C., & Tobias, J. H. (2014). Does Bone Resorption Stimulate Periosteal Expansion? A Cross-Sectional Analysis of β-C-telopeptides of Type I Collagen (CTX), Genetic Markers of the RANKL Pathway, and Periosteal Circumference as Measured by pQCT. Journal of Bone and Mineral Research, 29(4), 1015-1024. doi:10.1002/jbmr.2093.

    Abstract

    We hypothesized that bone resorption acts to increase bone strength through stimulation of periosteal expansion. Hence, we examined whether bone resorption, as reflected by serum β-C-telopeptides of type I collagen (CTX), is positively associated with periosteal circumference (PC), in contrast to inverse associations with parameters related to bone remodeling such as cortical bone mineral density (BMDC). CTX and mid-tibial peripheral quantitative computed tomography (pQCT) scans were available in 1130 adolescents (mean age 15.5 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC). Analyses were adjusted for age, gender, time of sampling, Tanner stage, lean mass, fat mass, and height. CTX was positively related to PC (β = 0.19 [0.13, 0.24]; coefficients are SD change per SD increase in CTX, with 95% confidence intervals) but inversely associated with BMDC (β = -0.46 [-0.52, -0.40]) and cortical thickness (β = -0.11 [-0.18, -0.03]). CTX was positively related to bone strength as reflected by the strength-strain index (SSI) (β = 0.09 [0.03, 0.14]). To examine the causal nature of this relationship, we then analyzed whether single-nucleotide polymorphisms (SNPs) within key osteoclast regulatory genes, known to reduce areal/cortical BMD, conversely increase PC. Fifteen such genetic variants within or proximal to genes encoding receptor activator of NF-κB (RANK), RANK ligand (RANKL), and osteoprotegerin (OPG) were identified by literature search. Six of the 15 alleles that were inversely related to BMD were positively related to CTX (p < 0.05 cut-off) (n=2379). Subsequently, we performed a meta-analysis of associations between these SNPs and PC in ALSPAC (n=3382), Gothenburg Osteoporosis and Obesity Determinants (GOOD) (n=938), and the Young Finns Study (YFS) (n=1558). Five of the 15 alleles that were inversely related to BMD were positively related to PC (p < 0.05 cut-off). We conclude that despite having lower BMD, individuals with a genetic predisposition to higher bone resorption have greater bone size, suggesting that higher bone resorption is permissive for greater periosteal expansion.
  • Kemp, J. P., Medina-Gomez, C., Estrada, K., St Pourcain, B., Heppe, D. H. M., Warrington, N. M., Oei, L., Ring, S. M., Kruithof, C. J., Timpson, N. J., Wolber, L. E., Reppe, S., Gautvik, K., Grundberg, E., Ge, B., van der Eerden, B., van de Peppel, J., Hibbs, M. A., Ackert-Bicknell, C. L., Choi, K., Koller, D. L., Econs, M. J., Williams, F. M. K., Foroud, T., Zillikens, M. C., Ohlsson, C., Hofman, A., Uitterlinden, A. G., Davey Smith, G., Jaddoe, V. W. V., Tobias, J. H., Rivadeneira, F., & Evans, D. M. (2014). Phenotypic dissection of bone mineral density reveals skeletal site specificity and facilitates the identification of novel loci in the genetic regulation of bone mass attainment. PLoS Genetics, 10(6): e1004423. doi:10.1371/journal.pgen.1004423.

    Abstract

    Heritability of bone mineral density (BMD) varies across skeletal sites, reflecting different relative contributions of genetic and environmental influences. To quantify the degree to which common genetic variants tag and environmental factors influence BMD, at different sites, we estimated the genetic (rg) and residual (re) correlations between BMD measured at the upper limbs (UL-BMD), lower limbs (LL-BMD) and skull (SK-BMD), using total-body DXA scans of ∼ 4,890 participants recruited by the Avon Longitudinal Study of Parents and their Children (ALSPAC). Point estimates of rg indicated that appendicular sites have a greater proportion of shared genetic architecture (LL-/UL-BMD rg = 0.78) between them, than with the skull (UL-/SK-BMD rg = 0.58 and LL-/SK-BMD rg = 0.43). Likewise, the residual correlation between BMD at appendicular sites (re = 0.55) was higher than the residual correlation between SK-BMD and BMD at appendicular sites (re = 0.20-0.24). To explore the basis for the observed differences in rg and re, genome-wide association meta-analyses were performed (n ∼ 9,395), combining data from ALSPAC and the Generation R Study identifying 15 independent signals from 13 loci associated at genome-wide significant level across different skeletal regions. Results suggested that previously identified BMD-associated variants may exert site-specific effects (i.e. differ in the strength of their association and magnitude of effect across different skeletal sites). In particular, variants at CPED1 exerted a larger influence on SK-BMD and UL-BMD when compared to LL-BMD (P = 2.01 × 10^-37), whilst variants at WNT16 influenced UL-BMD to a greater degree when compared to SK- and LL-BMD (P = 2.31 × 10^-14). In addition, we report a novel association between RIN3 (previously associated with Paget's disease) and LL-BMD (rs754388: β = 0.13, SE = 0.02, P = 1.4 × 10^-10). Our results suggest that BMD at different skeletal sites is under a mixture of shared and specific genetic and environmental influences. Allowing for these differences by performing genome-wide association at different skeletal sites may help uncover new genetic influences on BMD.
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
  • Kempen, G., & Kolk, H. (1980). Apentaal, een kwestie van intelligentie, niet van taalaanleg [Ape language: A matter of intelligence, not of language aptitude]. Cahiers Biowetenschappen en Maatschappij, 6, 31-36.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
