Publications

  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in alpha activity as an index of attentional allocation. We found that the onset of the face of an avatar with whom the participant had developed a rapport induced greater alpha suppression. This suggests that greater attentional resources are allocated to interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.

    Additional information

    mmc1.docx
  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources. Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: in this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants’ ability to process syntax while their attention is not, slightly, or overly taxed. Performance in the MOT task was significantly worse when conducted as a dual task compared to as a single task. We observed an inverted U-shaped curve on priming magnitude when conducting the MOT task concurrently with prime sentences (i.e., memory encoding), but no effect when conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information domain-general attention does not influence syntactic processing.
  • Hickman, L. J., Keating, C. T., Ferrari, A., & Cook, J. L. (2022). Skin conductance as an index of alexithymic traits in the general population. Psychological Reports, 125(3), 1363-1379. doi:10.1177/00332941211005118.

    Abstract

    Alexithymia concerns a difficulty identifying and communicating one’s own emotions, and a tendency towards externally-oriented thinking. Recent work argues that such alexithymic traits are due to altered arousal response and poor subjective awareness of “objective” arousal responses. Although there are individual differences within the general population in identifying and describing emotions, extant research has focused on highly alexithymic individuals. Here we investigated whether mean arousal and concordance between subjective and objective arousal underpin individual differences in alexithymic traits in a general population sample. Participants rated subjective arousal responses to 60 images from the International Affective Picture System whilst their skin conductance was recorded. The Autism Quotient was employed to control for autistic traits in the general population. Analysis using linear models demonstrated that mean arousal significantly predicted Toronto Alexithymia Scale scores above and beyond autistic traits, but concordance scores did not. This indicates that, whilst objective arousal is a useful predictor in populations that are both above and below the cut-off values for alexithymia, concordance scores between objective and subjective arousal do not predict variation in alexithymic traits in the general population.
  • Hilverman, C., Clough, S., Duff, M. C., & Cook, S. W. (2018). Patients with hippocampal amnesia successfully integrate gesture and speech. Neuropsychologia, 117, 332-338. doi:10.1016/j.neuropsychologia.2018.06.012.

    Abstract

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus, known for its role in relational memory and information integration, is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives, and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and produced fewer retellings that matched the speech from the narrative. Yet their retellings included features containing information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there’s suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Holler, J., Drijvers, L., Rafiee, A., & Majid, A. (2022). Embodied space-pitch associations are shaped by language. Cognitive Science, 46(2): e13083. doi:10.1111/cogs.13083.

    Abstract

    Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers’ co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.

    Additional information

    supporting information
  • Holler, J. (2022). Visual bodily signals as core devices for coordinating minds in interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377(1859): 20210094. doi:10.1098/rstb.2021.0094.

    Abstract

    The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed—and survived—owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine.
  • Holler, J., Bavelas, J., Woods, J., Geiger, M., & Simons, L. (2022). Given-new effects on the duration of gestures and of words in face-to-face dialogue. Discourse Processes, 59(8), 619-645. doi:10.1080/0163853X.2022.2107859.

    Abstract

    The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of co-speech gestures. Here, two face-to-face dialogue experiments tested whether gesture duration also follows the given-new contract. In Experiment 1, four experimental sequences confirmed that when speakers repeated their gestures, they shortened the duration significantly. Experiment 2 replicated the effect with spontaneous gestures in a different task. This experiment also extended earlier results with words, confirming that speakers shortened their repeated words significantly in a multimodal dialogue setting, the basic form of language use. Because words and gestures were not necessarily redundant, these results offer another instance in which gestures and words independently serve pragmatic requirements of dialogue.
  • Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.

    Abstract

    Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation.

    Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.

    Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.

    The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, coopted for communication, and refined with domain-specific characteristics.

    A new, situated framework for understanding human language processing is called for that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction requiring fast processing.
  • Holler, J., Kendrick, K. H., & Levinson, S. C. (2018). Processing language in face-to-face conversation: Questions with gestures get faster responses. Psychonomic Bulletin & Review, 25(5), 1900-1908. doi:10.3758/s13423-017-1363-z.

    Abstract

    The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast—typically a mere 200 ms elapses between a current and a next speaker’s contribution—meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This begs the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: Questions accompanied by gestures lead to shorter turn transition times—that is, to faster responses—than questions without gestures, and responses come earlier when gestures end before compared to after the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

    Additional information

    Supporting information
  • Hoogman, M., Van Rooij, D., Klein, M., Boedhoe, P., Ilioska, I., Li, T., Patel, Y., Postema, M., Zhang-James, Y., Anagnostou, E., Arango, C., Auzias, G., Banaschewski, T., Bau, C. H. D., Behrmann, M., Bellgrove, M. A., Brandeis, D., Brem, S., Busatto, G. F., Calderoni, S., Calvo, R., Castellanos, F. X., Coghill, D., Conzelmann, A., Daly, E., Deruelle, C., Dinstein, I., Durston, S., Ecker, C., Ehrlich, S., Epstein, J. N., Fair, D. A., Fitzgerald, J., Freitag, C. M., Frodl, T., Gallagher, L., Grevet, E. H., Haavik, J., Hoekstra, P. J., Janssen, J., Karkashadze, G., King, J. A., Konrad, K., Kuntsi, J., Lazaro, L., Lerch, J. P., Lesch, K.-P., Louza, M. R., Luna, B., Mattos, P., McGrath, J., Muratori, F., Murphy, C., Nigg, J. T., Oberwelland-Weiss, E., O'Gorman Tuura, R. L., O'Hearn, K., Oosterlaan, J., Parellada, M., Pauli, P., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Retico, A., Rosa, P. G. P., Rubia, K., Shaw, P., Silk, T. J., Tamm, L., Vilarroya, O., Walitza, S., Jahanshad, N., Faraone, S. V., Francks, C., Van den Heuvel, O. A., Paus, T., Thompson, P. M., Buitelaar, J. K., & Franke, B. (2022). Consortium neuroscience of attention deficit/hyperactivity disorder and autism spectrum disorder: The ENIGMA adventure. Human Brain Mapping, 43(1), 37-55. doi:10.1002/hbm.25029.

    Abstract

    Neuroimaging has been extensively used to study brain structure and function in individuals with attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) over the past decades. Two of the main shortcomings of the neuroimaging literature on these disorders are the small sample sizes employed and the heterogeneity of the methods used. In 2013 and 2014, the ENIGMA-ADHD and ENIGMA-ASD working groups were founded, respectively, with a common goal to address these limitations. Here, we provide a narrative review of the thus far completed and still ongoing projects of these working groups. Due to an implicitly hierarchical psychiatric diagnostic classification system, the fields of ADHD and ASD have developed largely in isolation, despite the considerable overlap in the occurrence of the disorders. The collaboration between the ENIGMA-ADHD and -ASD working groups seeks to bring the neuroimaging efforts of the two disorders closer together. The outcomes of case–control studies of subcortical and cortical structures showed that subcortical volumes are similarly affected in ASD and ADHD, albeit with small effect sizes. Cortical analyses identified unique differences in each disorder, but also considerable overlap between the two, specifically in cortical thickness. Ongoing work is examining alternative research questions, such as brain laterality, prediction of case–control status, and anatomical heterogeneity. In brief, great strides have been made toward fulfilling the aims of the ENIGMA collaborations, while new ideas and follow-up analyses continue that include more imaging modalities (diffusion MRI and resting-state functional MRI), collaborations with other large databases, and samples with dual diagnoses.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hörpel, S. G., & Firzlaff, U. (2019). Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. Journal of Neurophysiology, 121(4), 1501-1512. doi:10.1152/jn.00748.2018.
  • Howe, L. J., Lee, M. K., Sharp, G. C., Smith, G. D. W., St Pourcain, B., Shaffer, J. R., Ludwig, K. U., Mangold, E., Marazita, M. L., Feingold, E., Zhurov, A., Stergiakouli, E., Sandy, J., Richmond, S., Weinberg, S. M., Hemani, G., & Lewis, S. J. (2018). Investigating the shared genetics of non-syndromic cleft lip/palate and facial morphology. PLoS Genetics, 14(8): e1007501. doi:10.1371/journal.pgen.1007501.

    Abstract

    There is increasing evidence that genetic risk variants for non-syndromic cleft lip/palate (nsCL/P) are also associated with normal-range variation in facial morphology. However, previous analyses are mostly limited to candidate SNPs and findings have not been consistently replicated. Here, we used polygenic risk scores (PRS) to test for genetic overlap between nsCL/P and seven biologically relevant facial phenotypes. Where evidence was found of genetic overlap, we used bidirectional Mendelian randomization (MR) to test the hypothesis that genetic liability to nsCL/P is causally related to implicated facial phenotypes. Across 5,804 individuals of European ancestry from two studies, we found strong evidence, using PRS, of genetic overlap between nsCL/P and philtrum width; a 1 S.D. increase in nsCL/P PRS was associated with a 0.10 mm decrease in philtrum width (95% C.I. 0.054, 0.146; P = 2 × 10−5). Follow-up MR analyses supported a causal relationship; genetic variants for nsCL/P homogeneously cause decreased philtrum width. In addition to the primary analysis, we also identified two novel risk loci for philtrum width at 5q22.2 and 7p15.2 in our Genome-wide Association Study (GWAS) of 6,136 individuals. Our results support a liability threshold model of inheritance for nsCL/P, related to abnormalities in development of the philtrum.
  • Howe, L., Lawson, D. J., Davies, N. M., St Pourcain, B., Lewis, S. J., Smith, G. D., & Hemani, G. (2019). Genetic evidence for assortative mating on alcohol consumption in the UK Biobank. Nature Communications, 10: 5039. doi:10.1038/s41467-019-12424-x.

    Abstract

    Alcohol use is correlated within spouse-pairs, but it is difficult to disentangle effects of alcohol consumption on mate-selection from social factors or the shared spousal environment. We hypothesised that genetic variants related to alcohol consumption may, via their effect on alcohol behaviour, influence mate selection. Here, we find strong evidence that an individual’s self-reported alcohol consumption and their genotype at rs1229984, a missense variant in ADH1B, are associated with their partner’s self-reported alcohol use. Applying Mendelian randomization, we estimate that a unit increase in an individual’s weekly alcohol consumption increases partner’s alcohol consumption by 0.26 units (95% C.I. 0.15, 0.38; P = 8.20 × 10−6). Furthermore, we find evidence of spousal genotypic concordance for rs1229984, suggesting that spousal concordance for alcohol consumption existed prior to cohabitation. Although the SNP is strongly associated with ancestry, our results suggest some concordance independent of population stratification. Our findings suggest that alcohol behaviour directly influences mate selection.
  • Howe, L. J., Richardson, T. G., Arathimos, R., Alvizi, L., Passos-Bueno, M. R., Stanier, P., Nohr, E., Ludwig, K. U., Mangold, E., Knapp, M., Stergiakouli, E., St Pourcain, B., Smith, G. D., Sandy, J., Relton, C. L., Lewis, S. J., Hemani, G., & Sharp, G. C. (2019). Evidence for DNA methylation mediating genetic liability to non-syndromic cleft lip/palate. Epigenomics, 11(2), 133-145. doi:10.2217/epi-2018-0091.

    Abstract

    Aim: To determine if nonsyndromic cleft lip with or without cleft palate (nsCL/P) genetic risk variants influence liability to nsCL/P through gene regulation pathways, such as those involving DNA methylation. Materials & methods: nsCL/P genetic summary data and methylation data from four studies were used in conjunction with Mendelian randomization and joint likelihood mapping to investigate potential mediation of nsCL/P genetic variants. Results & conclusion: Evidence was found at VAX1 (10q25.3), LOC146880 (17q23.3) and NTN1 (17p13.1), that liability to nsCL/P and variation in DNA methylation might be driven by the same genetic variant, suggesting that genetic variation at these loci may increase liability to nsCL/P by influencing DNA methylation. Follow-up analyses using different tissues and gene expression data provided further insight into possible biological mechanisms.

    Additional information

    Supplementary material
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hubers, F., Cucchiarini, C., Strik, H., & Dijkstra, T. (2019). Normative data of Dutch idiomatic expressions: Subjective judgments you can bank on. Frontiers in Psychology, 10: 1075. doi:10.3389/fpsyg.2019.01075.

    Abstract

    The processing of idiomatic expressions is a topical issue in empirical research. Various factors have been found to influence idiom processing, such as idiom familiarity and idiom transparency. Information on these variables is usually obtained through norming studies. Studies investigating the effect of various properties on idiom processing have led to ambiguous results. This may be due to the variability of operationalizations of the idiom properties across norming studies, which in turn may affect the reliability of the subjective judgements. However, not all studies that collected normative data on idiomatic expressions investigated their reliability, and studies that did address the reliability of subjective ratings used various measures and produced mixed results. In this study, we investigated the reliability of subjective judgements, the relation between subjective and objective idiom frequency, and the impact of these dimensions on the participants’ idiom knowledge by collecting normative data on five subjective idiom properties (Frequency of Exposure, Meaning Familiarity, Frequency of Usage, Transparency, and Imageability) from 390 native speakers and objective corpus frequency for 374 Dutch idiomatic expressions. For reliability, we compared measures calculated in previous studies with the D-coefficient, a metric taken from Generalizability Theory. High reliability was found for all subjective dimensions. One reliability metric, Krippendorff’s alpha, generally produced lower values, while similar values were obtained for three other measures (Cronbach’s alpha, Intraclass Correlation Coefficient, and the D-coefficient). Advantages of the D-coefficient are that it can be applied to unbalanced research designs and used to estimate the minimum number of raters required to obtain reliable ratings. Slightly higher coefficients were observed for so-called experience-based dimensions (Frequency of Exposure, Meaning Familiarity, and Frequency of Usage) than for content-based dimensions (Transparency and Imageability). In addition, fewer raters were required to obtain reliable ratings for the experience-based dimensions. Subjective and objective frequency appeared to be poorly correlated, while all subjective idiom properties and objective frequency turned out to affect idiom knowledge. Meaning Familiarity, Subjective and Objective Frequency of Exposure, Frequency of Usage, and Transparency positively contributed to idiom knowledge, while a negative effect was found for Imageability. We discuss these relationships in more detail, and give methodological recommendations with respect to the procedures and the measure to calculate reliability.

    Additional information

    supplementary material
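    The reliability coefficients compared in the entry above (Cronbach’s alpha, the Intraclass Correlation Coefficient, Krippendorff’s alpha, and the D-coefficient) are all computed from a raters × items ratings matrix. As an illustrative sketch only — this is not the authors’ code, and the `cronbach_alpha` function and the sample ratings are hypothetical — Cronbach’s alpha for a set of raters judging the same idioms can be computed with the Python standard library:

    ```python
    from statistics import variance

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a raters x items ratings matrix.

        ratings: list of raters, each a list of scores for the same items.
        """
        k = len(ratings)                      # number of raters
        n_items = len(ratings[0])
        # Total score each item receives, summed over all raters
        item_totals = [sum(rater[j] for rater in ratings) for j in range(n_items)]
        # alpha = k/(k-1) * (1 - sum of per-rater variances / variance of item totals)
        sum_rater_vars = sum(variance(rater) for rater in ratings)
        return k / (k - 1) * (1 - sum_rater_vars / variance(item_totals))

    # Three hypothetical raters scoring four idioms on a 5-point familiarity scale
    ratings = [
        [1, 2, 4, 5],
        [2, 2, 5, 5],
        [1, 3, 4, 4],
    ]
    print(round(cronbach_alpha(ratings), 3))  # high agreement -> 0.957
    ```

    Perfect agreement across raters yields alpha = 1.0; disagreement pushes the per-rater variances up relative to the variance of the item totals, lowering alpha.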
  • Huettig, F., Kolinsky, R., & Lachmann, T. (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33(3), 275-277. doi:10.1080/23273798.2018.1425803.

    Abstract

    Introduction to the special issue 'The Effects of Literacy on Cognition and Brain Functioning'
  • Huettig, F., Kolinsky, R., & Lachmann, T. (Eds.). (2018). The effects of literacy on cognition and brain functioning [Special Issue]. Language, Cognition and Neuroscience, 33(3).
  • Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.

    Abstract

    Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language, thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and to more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate the retrieval of predicted representations. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening.
  • Huettig, F., Audring, J., & Jackendoff, R. (2022). A parallel architecture perspective on pre-activation and prediction in language processing. Cognition, 224: 105050. doi:10.1016/j.cognition.2022.105050.

    Abstract

    A recent trend in psycholinguistic research has been to posit prediction as an essential function of language processing. The present paper develops a linguistic perspective on viewing prediction in terms of pre-activation. We describe what predictions are and how they are produced. Our basic premises are that (a) no prediction can be made without knowledge to support it; and (b) it is therefore necessary to characterize the precise form of that knowledge, as revealed by a suitable theory of linguistic representations. We describe the Parallel Architecture (PA: Jackendoff, 2002; Jackendoff and Audring, 2020), which makes explicit our commitments about linguistic representations, and we develop an account of processing based on these representations. Crucial to our account is that what have been traditionally treated as derivational rules of grammar are formalized by the PA as lexical items, encoded in the same format as words. We then present a theory of prediction in these terms: linguistic input activates lexical items whose beginning (or incipit) corresponds to the input encountered so far; and prediction amounts to pre-activation of the as yet unheard parts of those lexical items (the remainder). Thus the generation of predictions is a natural byproduct of processing linguistic representations. We conclude that the PA perspective on pre-activation provides a plausible account of prediction in language processing that bridges linguistic and psycholinguistic theorizing.
  • Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.

    Abstract

    There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
  • Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.

    Abstract

    The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research.
  • Huisman, J. L. A., & Majid, A. (2018). Psycholinguistic variables matter in odor naming. Memory & Cognition, 46, 577-588. doi:10.3758/s13421-017-0785-1.

    Abstract

    People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show properties of the label, such as word frequency and semantic context, influence naming; but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used for manipulating semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate psycholinguistic variables—such as word frequency—are relevant for olfactory naming, and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor’s label.
  • Huisman, J. L. A., Majid, A., & Van Hout, R. (2019). The geographical configuration of a language area influences linguistic diversity. PLoS One, 14(6): e0217363. doi:10.1371/journal.pone.0217363.

    Abstract

    Like the transfer of genetic variation through gene flow, language changes constantly as a result of its use in human interaction. Contact between speakers is most likely to happen when they are close in space, time, and social setting. Here, we investigated the role of geographical configuration in this process by studying linguistic diversity in Japan, which comprises a large connected mainland (less isolation, more potential contact) and smaller island clusters of the Ryukyuan archipelago (more isolation, less potential contact). We quantified linguistic diversity using dialectometric methods, and performed regression analyses to assess the extent to which distance in space and time predict contemporary linguistic diversity. We found that language diversity in general increases as geographic distance increases and as time passes—as with biodiversity. Moreover, we found that (i) for mainland languages, linguistic diversity is most strongly related to geographic distance—a so-called isolation-by-distance pattern, and that (ii) for island languages, linguistic diversity reflects the time since varieties separated and diverged—an isolation-by-colonisation pattern. Together, these results confirm previous findings that (linguistic) diversity is shaped by distance, but also go beyond them by demonstrating the critical role of geographical configuration.
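    The isolation-by-distance pattern described in this entry is, at its core, a regression of pairwise linguistic distance on geographic distance. A minimal toy sketch — not the paper’s dialectometric analysis; the site distances below are invented for illustration — using only Python’s standard library:

    ```python
    from statistics import linear_regression, correlation

    # Hypothetical pairwise distances between dialect sites:
    # geographic distance (km) and linguistic distance (arbitrary units)
    geo_km = [10, 40, 80, 150, 300, 500]
    ling_dist = [0.05, 0.12, 0.22, 0.35, 0.61, 0.95]

    slope, intercept = linear_regression(geo_km, ling_dist)
    r = correlation(geo_km, ling_dist)

    # Under isolation by distance, linguistic distance grows with geographic
    # distance, so both the slope and the correlation are positive.
    print(f"slope={slope:.4f} per km, intercept={intercept:.3f}, r={r:.3f}")
    ```

    An isolation-by-colonisation pattern would instead use time since separation as the predictor; the regression machinery is the same.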
  • Huizeling, E., Arana, S., Hagoort, P., & Schoffelen, J.-M. (2022). Lexical frequency and sentence context influence the brain’s response to single words. Neurobiology of Language, 3(1), 149-179. doi:10.1162/nol_a_00054.

    Abstract

    Typical adults read remarkably quickly. Such fast reading is facilitated by brain processes that are sensitive to both word frequency and contextual constraints. It is debated whether these attributes have additive or interactive effects on language processing in the brain. We investigated this issue by analysing existing magnetoencephalography data from 99 participants reading intact and scrambled sentences. Using a cross-validated model comparison scheme, we found that lexical frequency predicted the word-by-word elicited MEG signal in a widespread cortical network, irrespective of sentential context. In contrast, index (ordinal word position) was more strongly encoded in sentence words, in left fronto-temporal areas. This confirms that frequency influences word processing independently of predictability, and that contextual constraints affect word-by-word brain responses. With a conservative multiple comparisons correction, only the interaction between lexical frequency and surprisal survived, in anterior temporal and frontal cortex, and not between lexical frequency and entropy, nor between lexical frequency and index. However, interestingly, the uncorrected index*frequency interaction revealed an effect in left frontal and temporal cortex that reversed in time and space for intact compared to scrambled sentences. Finally, we provide evidence to suggest that, in sentences, lexical frequency and predictability may independently influence early (<150 ms) and late stages of word processing, but interact during later stages of word processing (150-250 ms), thus helping to reconcile previously contradictory findings in the eye-tracking and electrophysiological literature. Current neuro-cognitive models of reading would benefit from accounting for these differing effects of lexical frequency and predictability on different stages of word processing.
  • Huizeling, E., Peeters, D., & Hagoort, P. (2022). Prediction of upcoming speech under fluent and disfluent conditions: Eye tracking evidence from immersive virtual reality. Language, Cognition and Neuroscience, 37(4), 481-508. doi:10.1080/23273798.2021.1994621.

    Abstract

    Traditional experiments indicate that prediction is important for efficient speech processing. In three virtual reality visual world paradigm experiments, we tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2–3). Experiment 1 supports that listeners predict upcoming speech in naturalistic environments, with higher proportions of anticipatory target fixations in predictable compared to unpredictable trials. In Experiments 2–3, disfluencies reduced anticipatory fixations towards predicted referents, compared to conjunction (Experiment 2) and fluent (Experiment 3) sentences. Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb. Experiment 3 provided novel findings that fixations towards the speaker increase upon hearing a hesitation, supporting current theories of how hesitations influence sentence processing. Together, these findings unpack listeners’ use of visual (objects/speaker) and auditory (speech/disfluencies) information when predicting upcoming words.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • Hustá, C., Dalmaijer, E., Belopolsky, A., & Mathôt, S. (2019). The pupillary light response reflects visual working memory content. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1522-1528. doi:10.1037/xhp0000689.

    Abstract

    Recent studies have shown that the pupillary light response (PLR) is modulated by higher cognitive functions, presumably through activity in visual sensory brain areas. Here we use the PLR to test the involvement of sensory areas in visual working memory (VWM). In two experiments, participants memorized either bright or dark stimuli. We found that pupils were smaller when a prestimulus cue indicated that a bright stimulus should be memorized; this reflects a covert shift of attention during encoding of items into VWM. Crucially, we obtained the same result with a poststimulus cue, which shows that internal shifts of attention within VWM affect pupil size as well. Strikingly, the effect of VWM content on pupil size was most pronounced immediately after the poststimulus cue, and then dissipated. This suggests that a shift of attention within VWM momentarily activates an "active" memory representation, but that this representation quickly transforms into a "hidden" state that does not rely on sensory areas.

    Additional information

    Supplementary_xhp0000689.docx
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2019). How in-group bias influences source memory for words learned from in-group and out-group speakers. Frontiers in Human Neuroscience, 13: 308. doi:10.3389/fnhum.2019.00308.

    Abstract

    Individuals rapidly extract information about others’ social identity, including whether or not they belong to their in-group. Group membership status has been shown to affect how attentively people encode information conveyed by those others. These findings are highly relevant for the field of psycholinguistics where there exists an open debate on how words are represented in the mental lexicon and how abstract or context-specific these representations are. Here, we used a novel word learning paradigm to test our proposal that the group membership status of speakers also affects how speaker-specific the representations of novel words are. Participants learned new words from speakers who either attended their own university (in-group speakers) or did not (out-group speakers) and performed a task to measure their individual in-group bias. Then, their source memory of the new words was tested in a recognition test to probe the speaker-specific content of the novel lexical representations and assess how it related to individual in-group biases. We found that speaker group membership and participants’ in-group bias affected participants’ decision biases. The stronger the in-group bias, the more cautious participants were in their decisions. This applied particularly to in-group-related decisions. These findings indicate that social biases can influence recognition thresholds. Taking a broader scope, defining how information is represented is a topic of great overlap between the fields of memory and psycholinguistics. Nevertheless, researchers from these fields tend to stay within the theoretical and methodological borders of their own field, missing the chance to deepen their understanding of phenomena that are of common interest. Here we show how methodologies developed in the memory field can be implemented in language research to shed light on an important theoretical issue that relates to the composition of lexical representations.

    Additional information

    Supplementary material
  • Inacio, F., Faisca, L., Forkstam, C., Araujo, S., Bramao, I., Reis, A., & Petersson, K. M. (2018). Implicit sequence learning is preserved in dyslexic children. Annals of Dyslexia, 68(1), 1-14. doi:10.1007/s11881-018-0158-x.

    Abstract

    This study investigates the implicit sequence learning abilities of dyslexic children using an artificial grammar learning task with an extended exposure period. Twenty children with developmental dyslexia participated in the study and were matched with two control groups—one matched for age and the other for reading skills. Over three days, all participants performed an acquisition task, where they were exposed to sequences of colored geometrical forms with an underlying grammatical structure. On the last day, after the acquisition task, participants were tested in a grammaticality classification task. Implicit sequence learning was present in dyslexic children, as well as in both control groups, and no differences between groups were observed. These results suggest that implicit learning deficits per se cannot explain the characteristic reading difficulties of dyslexic children.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Ioumpa, K., Graham, S. A., Clausner, T., Fisher, S. E., Van Lier, R., & Van Leeuwen, T. M. (2019). Enhanced self-reported affect and prosocial behaviour without differential physiological responses in mirror-sensory synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190395. doi:10.1098/rstb.2019.0395.

    Abstract

    Mirror-sensory synaesthetes mirror the pain or touch that they observe in other people on their own bodies. This type of synaesthesia has been associated with enhanced empathy. We investigated whether the enhanced empathy of people with mirror-sensory synaesthesia influences the experience of situations involving touch or pain and whether it affects their prosocial decision making. Mirror-sensory synaesthetes (N = 18, all female), verified with a touch-interference paradigm, were compared with a similar number of age-matched control individuals (all female). Participants viewed arousing images depicting pain or touch; we recorded subjective valence and arousal ratings, and physiological responses, hypothesizing more extreme reactions in synaesthetes. The subjective impact of positive and negative images was stronger in synaesthetes than in control participants; the stronger the reported synaesthesia, the more extreme the picture ratings. However, there was no evidence for differential physiological or hormonal responses to arousing pictures. Prosocial decision making was assessed with an economic game assessing altruism, in which participants had to divide money between themselves and a second player. Mirror-sensory synaesthetes donated more money than non-synaesthetes, showing enhanced prosocial behaviour, and also scored higher on the Interpersonal Reactivity Index as a measure of empathy. Our study demonstrates the subjective impact of mirror-sensory synaesthesia and its stimulating influence on prosocial behaviour.

  • Isbilen, E. S., Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2022). Statistically based chunking of nonadjacent dependencies. Journal of Experimental Psychology: General, 151(11), 2623-2640. doi:10.1037/xge0001207.

    Abstract

    How individuals learn complex regularities in the environment and generalize them to new instances is a key question in cognitive science. Although previous investigations have advocated the idea that learning and generalizing depend upon separate processes, the same basic learning mechanisms may account for both. In language learning experiments, these mechanisms have typically been studied in isolation of broader cognitive phenomena such as memory, perception, and attention. Here, we show how learning and generalization in language is embedded in these broader theories by testing learners on their ability to chunk nonadjacent dependencies—a key structure in language but a challenge to theories that posit learning through the memorization of structure. In two studies, adult participants were trained and tested on an artificial language containing nonadjacent syllable dependencies, using a novel chunking-based serial recall task involving verbal repetition of target sequences (formed from learned strings) and scrambled foils. Participants recalled significantly more syllables, bigrams, trigrams, and nonadjacent dependencies from sequences conforming to the language’s statistics (both learned and generalized sequences). They also encoded and generalized specific nonadjacent chunk information. These results suggest that participants chunk remote dependencies and rapidly generalize this information to novel structures. The results thus provide further support for learning-based approaches to language acquisition, and link statistical learning to broader cognitive mechanisms of memory.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Iyer, S., Sam, F. S., DiPrimio, N., Preston, G., Verheijen, J., Murthy, K., Parton, Z., Tsang, H., Lao, J., Morava, E., & Perlstein, E. O. (2019). Repurposing the aldose reductase inhibitor and diabetic neuropathy drug epalrestat for the congenital disorder of glycosylation PMM2-CDG. Disease models & mechanisms, 12(11): UNSP dmm040584. doi:10.1242/dmm.040584.

    Abstract

    Phosphomannomutase 2 deficiency, or PMM2-CDG, is the most common congenital disorder of glycosylation and affects over 1000 patients globally. There are no approved drugs that treat the symptoms or root cause of PMM2-CDG. To identify clinically actionable compounds that boost human PMM2 enzyme function, we performed a multispecies drug repurposing screen using a novel worm model of PMM2-CDG, followed by PMM2 enzyme functional studies in PMM2-CDG patient fibroblasts. Drug repurposing candidates from this study, and drug repurposing candidates from a previously published study using yeast models of PMM2-CDG, were tested for their effect on human PMM2 enzyme activity in PMM2-CDG fibroblasts. Of the 20 repurposing candidates discovered in the worm-based phenotypic screen, 12 were plant-based polyphenols. Insights from structure-activity relationships revealed epalrestat, the only antidiabetic aldose reductase inhibitor approved for use in humans, as a first-in-class PMM2 enzyme activator. Epalrestat increased PMM2 enzymatic activity in four PMM2-CDG patient fibroblast lines with genotypes R141H/F119L, R141H/E139K, R141H/N216I and R141H/F183S. PMM2 enzyme activity gains ranged from 30% to 400% over baseline, depending on genotype. Pharmacological inhibition of aldose reductase by epalrestat may shunt glucose from the polyol pathway to glucose-1,6-bisphosphate, which is an endogenous stabilizer and coactivator of PMM2 homodimerization. Epalrestat is a safe, oral and brain-penetrant drug that was approved 27 years ago in Japan to treat diabetic neuropathy in geriatric populations. We demonstrate that epalrestat is the first small molecule activator of PMM2 enzyme activity with the potential to treat peripheral neuropathy and correct the underlying enzyme deficiency in a majority of pediatric and adult PMM2-CDG patients.

    Additional information

    DMM040584supp.pdf
  • Jackson, C. N., Mormer, E., & Brehm, L. (2018). The production of subject-verb agreement among Swedish and Chinese second language speakers of English. Studies in Second Language Acquisition, 40(4), 907-921. doi:10.1017/S0272263118000025.

    Abstract

    This study uses a sentence completion task with Swedish and Chinese L2 English speakers to investigate how L1 morphosyntax and L2 proficiency influence L2 English subject-verb agreement production. Chinese has limited nominal and verbal number morphology, while Swedish has robust noun phrase (NP) morphology but does not number-mark verbs. Results showed that like L1 English speakers, both L2 groups used grammatical and conceptual number to produce subject-verb agreement. However, only L1 Chinese speakers—and less-proficient speakers in both L2 groups—were similarly influenced by grammatical and conceptual number when producing the subject NP. These findings demonstrate how L2 proficiency, perhaps combined with cross-linguistic differences, influence L2 production and underscore that encoding of noun and verb number are not independent.
  • Jacobs, A. M., & Willems, R. M. (2018). The fictive brain: Neurocognitive correlates of engagement in literature. Review of General Psychology, 22(2), 147-160. doi:10.1037/gpr0000106.

    Abstract

    Fiction is vital to our being. Many people enjoy engaging with fiction every day. Here we focus on literary reading as 1 instance of fiction consumption from a cognitive neuroscience perspective. The brain processes that play a role in the mental construction of fiction worlds, and in the related engagement with fictional characters, remain largely unknown. The authors discuss the neurocognitive poetics model (Jacobs, 2015a) of literary reading, specifying the likely neuronal correlates of several key processes in literary reading, namely inference and situation model building, immersion, mental simulation and imagery, figurative language and style, and the issue of distinguishing fact from fiction. An overview of recent work on these key processes is followed by a discussion of methodological challenges in studying the brain bases of fiction processing.
  • Jadoul, Y., Thompson, B., & De Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. doi:10.1016/j.wocn.2018.07.001.

    Abstract

    This paper introduces Parselmouth, an open-source Python library that facilitates access to core functionality of Praat in Python, in an efficient and programmer-friendly way. We introduce and motivate the package, and present simple usage examples. Specifically, we focus on applications in data visualisation, file manipulation, audio manipulation, statistical analysis, and integration of Parselmouth into a Python-based experimental design for automated, in-the-loop manipulation of acoustic data. Parselmouth is available at https://github.com/YannickJadoul/Parselmouth.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2019). The effects of larynx height on vowel production are mitigated by the active control of articulators. Journal of Phonetics, 74, 1-17. doi:10.1016/j.wocn.2019.02.002.

    Abstract

    The influence of larynx position on vowel articulation is an important topic in understanding speech production, the present-day distribution of linguistic diversity and the evolution of speech and language in our lineage. We introduce here a realistic computer model of the vocal tract, constructed from actual human MRI data, which can learn, using machine learning techniques, to control the articulators in such a way as to produce speech sounds matching as closely as possible to a given set of target vowels. We systematically control the vertical position of the larynx and we quantify the differences between the target and produced vowels for each such position across multiple replications. We report that, indeed, larynx height does affect the accuracy of reproducing the target vowels and the distinctness of the produced vowel system, that there is a “sweet spot” of larynx positions that are optimal for vowel production, but that nevertheless, even extreme larynx positions do not result in a collapsed or heavily distorted vowel space that would make speech unintelligible. Together with other lines of evidence, our results support the view that the vowel space of human languages is influenced by our larynx position, but that other positions of the larynx may also be fully compatible with speech.

    Additional information

    Research Data via Github
  • Janssens, S. E. W., Sack, A. T., Ten Oever, S., & De Graaf, T. A. (2022). Calibrating rhythmic stimulation parameters to individual electroencephalography markers: The consistency of individual alpha frequency in practical lab settings. European Journal of Neuroscience, 55(11/12), 3418-3437. doi:10.1111/ejn.15418.

    Abstract

    Rhythmic stimulation can be applied to modulate neuronal oscillations. Such ‘entrainment’ is optimized when stimulation frequency is individually calibrated based on magneto-/electroencephalography (M/EEG) markers. It remains unknown how consistent such individual markers are across days/sessions, within a session, or across cognitive states, hemispheres and estimation methods, especially in a realistic, practical, lab setting. We here estimated individual alpha frequency (IAF) repeatedly from short electroencephalography (EEG) measurements at rest or during an attention task (cognitive state), using single parieto-occipital electrodes in 24 participants on 4 days (between-sessions), with multiple measurements over an hour on 1 day (within-session). First, we introduce an algorithm to automatically reject power spectra without a sufficiently clear peak to ensure unbiased IAF estimations. Then we estimated IAF via the traditional ‘maximum’ method and a ‘Gaussian fit’ method. IAF was reliable within- and between-sessions for both cognitive states and hemispheres, though task-IAF estimates tended to be more variable. Overall, the ‘Gaussian fit’ method was more reliable than the ‘maximum’ method. Furthermore, we evaluated how far from an approximated ‘true’ task-related IAF the selected ‘stimulation frequency’ was, when calibrating this frequency based on a short rest-EEG, a short task-EEG, or simply selecting 10 Hz for all participants. For the ‘maximum’ method, rest-EEG calibration was best, followed by task-EEG, and then 10 Hz. For the ‘Gaussian fit’ method, rest-EEG and task-EEG-based calibration were similarly accurate, and better than 10 Hz. These results lead to concrete recommendations about valid, and automated, estimation of individual oscillation markers in experimental and clinical settings.
  • Janssens, S. E., Ten Oever, S., Sack, A. T., & de Graaf, T. A. (2022). “Broadband Alpha Transcranial Alternating Current Stimulation”: Exploring a new biologically calibrated brain stimulation protocol. NeuroImage, 253: 119109. doi:10.1016/j.neuroimage.2022.119109.

    Abstract

    Transcranial alternating current stimulation (tACS) can be used to study causal contributions of oscillatory brain mechanisms to cognition and behavior. For instance, individual alpha frequency (IAF) tACS was reported to enhance alpha power and impact visuospatial attention performance. Unfortunately, such results have been inconsistent and difficult to replicate. In tACS, stimulation generally involves one frequency, sometimes individually calibrated to a peak value observed in an M/EEG power spectrum. Yet, the ‘peak’ actually observed in such power spectra often contains a broader range of frequencies, raising the question whether a biologically calibrated tACS protocol containing this fuller range of alpha-band frequencies might be more effective. Here, we introduce ‘Broadband-alpha-tACS’, a complex individually calibrated electrical stimulation protocol. We band-pass filtered left posterior resting-state EEG data around the IAF (+/- 2 Hz), and converted that time series into an electrical waveform for tACS stimulation of that same left posterior parietal cortex location. In other words, we stimulated a brain region with a ‘replay’ of its own alpha-band frequency content, based on spontaneous activity. Within-subjects (N=24), we compared to a sham tACS session the effects of broadband-alpha tACS, power-matched spectral inverse (‘alpha-removed’) control tACS, and individual alpha frequency tACS, on EEG alpha power and performance in an endogenous attention task previously reported to be affected by alpha tACS. Broadband-alpha-tACS significantly modulated attention task performance (i.e., reduced the rightward visuospatial attention bias in trials without distractors, and reduced attention benefits). Alpha-removed tACS also reduced the rightward visuospatial attention bias. IAF-tACS did not significantly modulate attention task performance compared to sham tACS, but also did not statistically significantly differ from broadband-alpha-tACS. This new broadband-alpha tACS approach seems promising, but should be further explored and validated in future studies.

    Additional information

    supplementary materials
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2022). The social basis of referential communication: Speakers construct physical reference based on listeners’ expected visual search. Psychological Review, 129, 1394-1413. doi:10.1037/rev0000345.

    Abstract

    A foundational assumption of human communication is that speakers should say as much as necessary, but no more. Yet, people routinely produce redundant adjectives and their propensity to do so varies cross-linguistically. Here, we propose a computational theory, whereby speakers create referential expressions designed to facilitate listeners’ reference resolution, as they process words in real time. We present a computational model of our account, the Incremental Collaborative Efficiency (ICE) model, which generates referential expressions by considering listeners’ real-time incremental processing and reference identification. We apply the ICE framework to physical reference, showing that speakers construct expressions designed to minimize listeners’ expected visual search effort during online language processing. Our model captures a number of known effects in the literature, including cross-linguistic differences in speakers’ propensity to over-specify. Moreover, the ICE model predicts graded acceptability judgments with quantitative accuracy, systematically outperforming an alternative, brevity-based model. Our findings suggest that physical reference production is best understood as driven by a collaborative goal to help the listener identify the intended referent, rather than by an egocentric effort to minimize utterance length.
  • Jessop, A., & Chang, F. (2022). Thematic role tracking difficulties across multiple visual events influences role use in language production. Visual Cognition, 30(3), 151-173. doi:10.1080/13506285.2021.2013374.

    Abstract

    Language sometimes requires tracking the same participant in different thematic roles across multiple visual events (e.g., The girl that another girl pushed chased a third girl). To better understand how vision and language interact in role tracking, participants described videos of multiple randomly moving circles where two push events were presented. A circle might have the same role in both push events (e.g., agent) or different roles (e.g., agent of one push and patient of other push). The first three studies found higher production accuracy for the same role conditions compared to the different role conditions across different linguistic structure manipulations. The last three studies compared a featural account, where role information was associated with particular circles, or a relational account, where role information was encoded with particular push events. These studies found no interference between different roles, contrary to the predictions of the featural account. The foil was manipulated in these studies to increase the saliency of the second push and it was found that this changed the accuracy in describing the first push. The results suggest that language-related thematic role processing uses a relational representation that can encode multiple events.

    Additional information

    https://doi.org/10.17605/OSF.IO/PKXZH
  • Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.

    Abstract

    Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid [Systematicity and dynamics in the acquisition of finiteness]. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Kakimoto, N., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Senda, Y., Iwamoto, Y., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2019). T2 relaxation times of the retrodiscal tissue in patients with temporomandibular joint disorders and in healthy volunteers: A comparative study. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 128(3), 311-318. doi:10.1016/j.oooo.2019.02.005.

    Abstract

    Objective. The aims of this study were to compare the temporomandibular joint (TMJ) retrodiscal tissue T2 relaxation times between patients with temporomandibular disorders (TMDs) and asymptomatic volunteers and to assess the diagnostic potential of this approach.
    Study Design. Patients with TMD (n = 173) and asymptomatic volunteers (n = 17) were examined by using a 1.5-T magnetic resonance scanner. The imaging protocol consisted of oblique sagittal, T2-weighted, 8-echo fast spin echo sequences in the closed mouth position. Retrodiscal tissue T2 relaxation times were obtained. Additionally, disc location and reduction, disc configuration, joint effusion, osteoarthritis, and bone edema or osteonecrosis were classified using MRI scans. The T2 relaxation times of each group were statistically compared.
    Results. Retrodiscal tissue T2 relaxation times were significantly longer in patient groups than in asymptomatic volunteers (P < .01). T2 relaxation times were significantly longer in all of the morphologic categories. The most important variables affecting retrodiscal tissue T2 relaxation times were disc configuration, joint effusion, and osteoarthritis.
    Conclusion. Retrodiscal tissue T2 relaxation times of patients with TMD were significantly longer than those of healthy volunteers. This finding may lead to the development of a diagnostic marker to aid in the early detection of TMDs.
  • Kalashnikova, M., Escudero, P., & Kidd, E. (2018). The development of fast-mapping and novel word retention strategies in monolingual and bilingual infants. Developmental Science, 21(6): e12674. doi:10.1111/desc.12674.

    Abstract

    The mutual exclusivity (ME) assumption is proposed to facilitate early word learning by guiding infants to map novel words to novel referents. This study assessed the emergence and use of ME to both disambiguate and retain the meanings of novel words across development in 18‐month‐old monolingual and bilingual children (Experiment 1; N = 58), and in a sub‐group of these children again at 24 months of age (Experiment 2: N = 32). Both monolinguals and bilinguals employed ME to select the referent of a novel label to a similar extent at 18 and 24 months. At 18 months, there were also no differences in novel word retention between the two language‐background groups. However, at 24 months, only monolinguals showed the ability to retain these label–object mappings. These findings indicate that the development of the ME assumption as a reliable word‐learning strategy is shaped by children's individual language exposure and experience with language use.

  • Kamermans, K. L., Pouw, W., Mast, F. W., & Paas, F. (2019). Reinterpretation in visual imagery is possible without visual cues: A validation of previous research. Psychological Research, 83(6), 1237-1250. doi:10.1007/s00426-017-0956-5.

    Abstract

    Is visual reinterpretation of bistable figures (e.g., duck/rabbit figure) in visual imagery possible? Current consensus suggests that it is in principle possible because of converging evidence of quasi-pictorial functioning of visual imagery. Yet, studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with aid of visual cues. Hence, reinterpretation was not performed with mental imagery alone. Therefore, in this study we assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180 degrees), two of which were visually inspected and one was explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 participants (out of 36; 30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not inflate reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.
  • Kamermans, K. L., Pouw, W., Fassi, L., Aslanidou, A., Paas, F., & Hostetter, A. B. (2019). The role of gesture as simulated action in reinterpretation of mental imagery. Acta Psychologica, 197, 131-142. doi:10.1016/j.actpsy.2019.05.004.

    Abstract

    In two experiments, we examined the role of gesture in reinterpreting a mental image. In Experiment 1, we found that participants gestured more about a figure they had learned through manual exploration than about a figure they had learned through vision. This supports claims that gestures emerge from the activation of perception-relevant actions during mental imagery. In Experiment 2, we investigated whether such gestures have a causal role in affecting the quality of mental imagery. Participants were randomly assigned to gesture, not gesture, or engage in a manual interference task as they attempted to reinterpret a figure they had learned through manual exploration. We found that manual interference significantly impaired participants' success on the task. Taken together, these results suggest that gestures reflect mental imaginings of interactions with a mental image and that these imaginings are critically important for mental manipulation and reinterpretation of that image. However, our results suggest that enacting the imagined movements in gesture is not critically important on this particular task.
  • Kanero, J., Geçkin, V., Oranç, C., Mamus, E., Küntay, A. C., & Göksun, T. (2018). Social robots for early language learning: Current evidence and future directions. Child Development Perspectives, 12(3), 146-151. doi:10.1111/cdep.12277.

    Abstract

    In this article, we review research on child–robot interaction (CRI) to discuss how social robots can be used to scaffold language learning in young children. First we provide reasons why robots can be useful for teaching first and second languages to children. Then we review studies on CRI that used robots to help children learn vocabulary and produce language. The studies vary in first and second languages and demographics of the learners (typically developing children and children with hearing and communication impairments). We conclude that, although social robots are useful for teaching language to children, evidence suggests that robots are not as effective as human teachers. However, this conclusion is not definitive because robots that tutor students in language have not been evaluated rigorously and technology is advancing rapidly. We suggest that CRI offers an opportunity for research and list possible directions for that work.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language. Advance online publication. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Karaminis, T., Hintz, F., & Scharenborg, O. (2022). The presence of background noise extends the competitor space in native and non-native spoken-word recognition: Insights from computational modeling. Cognitive Science, 46(2): e13110. doi:10.1111/cogs.13110.

    Abstract

    Oral communication often takes place in noisy environments, which challenge spoken-word recognition. Previous research has suggested that the presence of background noise extends the number of candidate words competing with the target word for recognition and that this extension affects the time course and accuracy of spoken-word recognition. In this study, we further investigated the temporal dynamics of competition processes in the presence of background noise, and how these vary in listeners with different language proficiency (i.e., native and non-native) using computational modeling. We developed ListenIN (Listen-In-Noise), a neural-network model based on an autoencoder architecture, which learns to map phonological forms onto meanings in two languages and simulates native and non-native spoken-word comprehension. Simulation A established that ListenIN captures the effects of noise on accuracy rates and the number of unique misperception errors of native and non-native listeners in an offline spoken-word identification task (Scharenborg et al., 2018). Simulation B showed that ListenIN captures the effects of noise in online task settings and accounts for looking preferences of native (Hintz & Scharenborg, 2016) and non-native (new data collected for this study) listeners in a visual-world paradigm. We also examined the model’s activation states during online spoken-word recognition. These analyses demonstrated that the presence of background noise increases the number of competitor words which are engaged in phonological competition and that this happens in similar ways intra- and interlinguistically and in native and non-native listening. Taken together, our results support accounts positing a ‘many-additional-competitors scenario’ for the effects of noise on spoken-word recognition.
  • Karsan, Ç., Özdemir, R. S., Bulut, T., & Hanoğlu, L. (2022). The effects of single-session cathodal and bihemispheric tDCS on fluency in stuttering. Journal of Neurolinguistics, 63(101064): 101064. doi:10.1016/j.jneuroling.2022.101064.

    Abstract

    Developmental stuttering is a fluency disorder that adversely affects many aspects of a person's life. Recent transcranial direct current stimulation (tDCS) studies have shown promise to improve fluency in people who stutter. To date, bihemispheric tDCS has not been investigated in this population. In the present study, we aimed to investigate the effects of single-session bihemispheric and unihemispheric cathodal tDCS on fluency in adults who stutter. We predicted that bihemispheric tDCS with anodal stimulation to the left IFG and cathodal stimulation to the right IFG would improve fluency better than the sham and cathodal tDCS to the right IFG. Seventeen adults who stutter completed this single-blind, crossover, sham-controlled tDCS experiment. All participants received 20 min of tDCS alongside metronome-timed speech during intervention sessions. Three tDCS interventions were administered: bihemispheric tDCS with anodal stimulation to the left IFG and cathodal stimulation to the right IFG, unihemispheric tDCS with cathodal stimulation to the right IFG, and sham stimulation. Speech fluency during reading and conversation was assessed before, immediately after, and one week after each intervention session. There was no significant fluency improvement in conversation for any tDCS intervention. Reading fluency improved following both bihemispheric and cathodal tDCS interventions. tDCS montages were not significantly different in their effects on fluency.

  • Kartushina, N., Mani, N., Aktan-Erciyes, A., Alaslani, K., Aldrich, N. J., Almohammadi, A., Alroqi, H., Anderson, L. M., Andonova, E., Aussems, S., Babineau, M., Barokova, M., Bergmann, C., Cashon, C., Custode, S., De Carvalho, A., Dimitrova, N., Dynak, A., Farah, R., Fennell, C., Fiévet, A.-C., Frank, M. C., Gavrilova, M., Gendler-Shalev, H., Gibson, S. P., Golway, K., Gonzalez-Gomez, N., Haman, E., Hannon, E., Havron, N., Hay, J., Hendriks, C., Horowitz-Kraus, T., Kalashnikova, M., Kanero, J., Keller, C., Krajewski, G., Laing, C., Lundwall, R. A., Łuniewska, M., Mieszkowska, K., Munoz, L., Nave, K., Olesen, N., Perry, L., Rowland, C. F., Santos Oliveira, D., Shinskey, J., Veraksa, A., Vincent, K., Zivan, M., & Mayor, J. (2022). COVID-19 first lockdown as a window into language acquisition: Associations between caregiver-child activities and vocabulary gains. Language Development Research, 2, 1-36. doi:10.34842/abym-xv34.

    Abstract

    The COVID-19 pandemic, and the resulting closure of daycare centers worldwide, led to unprecedented changes in children’s learning environments. This period of increased time at home with caregivers, with limited access to external sources (e.g., daycares), provides a unique opportunity to examine the associations between caregiver-child activities and children’s language development. The vocabularies of 1742 children aged 8-36 months across 13 countries and 12 languages were evaluated at the beginning and end of the first lockdown period in their respective countries (from March to September 2020). Children who had less passive screen exposure and whose caregivers read more to them showed larger gains in vocabulary development during lockdown, after controlling for SES and other caregiver-child activities. Children also gained more words than expected (based on normative data) during lockdown; either caregivers were more aware of their child’s development or vocabulary development benefited from intense caregiver-child interaction during lockdown.
  • Kaufhold, S. P., & Van Leeuwen, E. J. C. (2019). Why intergroup variation matters for understanding behaviour. Biology Letters, 15(11): 20190695. doi:10.1098/rsbl.2019.0695.

    Abstract

    Intergroup variation (IGV) refers to variation between different groups of the same species. While its existence in the behavioural realm has been expected and evidenced, the potential effects of IGV are rarely considered in studies that aim to shed light on the evolutionary origins of human socio-cognition, especially in our closest living relatives—the great apes. Here, by taking chimpanzees as a point of reference, we argue that (i) IGV could plausibly explain inconsistent research findings across numerous topics of inquiry (experimental/behavioural studies on chimpanzees), (ii) understanding the evolutionary origins of behaviour requires an accurate assessment of species' modes of behaving across different socio-ecological contexts, which necessitates a reliable estimation of variation across intraspecific groups, and (iii) IGV in the behavioural realm is increasingly likely to be expected owing to the progressive identification of non-human animal cultures. With these points, and by extrapolating from chimpanzees to generic guidelines, we aim to encourage researchers to explicitly consider IGV as an explanatory variable in future studies attempting to understand the socio-cognitive and evolutionary determinants of behaviour in group-living animals.
  • Kemmerer, S. K., Sack, A. T., de Graaf, T. A., Ten Oever, S., De Weerd, P., & Schuhmann, T. (2022). Frequency-specific transcranial neuromodulation of alpha power alters visuospatial attention performance. Brain Research, 1782: 147834. doi:10.1016/j.brainres.2022.147834.

    Abstract

    Transcranial alternating current stimulation (tACS) at 10 Hz has been shown to modulate spatial attention. However, the frequency-specificity and the oscillatory changes underlying this tACS effect are still largely unclear. Here, we applied high-definition tACS at individual alpha frequency (IAF), two control frequencies (IAF+/-2Hz) and sham to the left posterior parietal cortex and measured its effects on visuospatial attention performance and offline alpha power (using electroencephalography, EEG). We revealed a behavioural and electrophysiological stimulation effect relative to sham for IAF but not control frequency stimulation conditions: there was a leftward lateralization of alpha power for IAF tACS, which differed from sham for the first out of three minutes following tACS. At a high value of this EEG effect (moderation effect), we observed a leftward attention bias relative to sham. This effect was task-specific, i.e., it could be found in an endogenous attention but not in a detection task. Only in the IAF tACS condition, we also found a correlation between the magnitude of the alpha lateralization and the attentional bias effect. Our results support a functional role of alpha oscillations in visuospatial attention and the potential of tACS to modulate it. The frequency-specificity of the effects suggests that an individualization of the stimulation frequency is necessary in heterogeneous target groups with a large variation in IAF.

    Additional information

    supplementary data
  • Kemmerer, S. K., De Graaf, T. A., Ten Oever, S., Erkens, M., De Weerd, P., & Sack, A. T. (2022). Parietal but not temporoparietal alpha-tACS modulates endogenous visuospatial attention. Cortex, 154, 149-166. doi:10.1016/j.cortex.2022.01.021.

    Abstract

    Visuospatial attention can either be voluntarily directed (endogenous/top-down attention) or automatically triggered (exogenous/bottom-up attention). Recent research showed that dorsal parietal transcranial alternating current stimulation (tACS) at alpha frequency modulates the spatial attentional bias in an endogenous but not in an exogenous visuospatial attention task. Yet, the reason for this task-specificity remains unexplored. Here, we tested whether this dissociation relates to the proposed differential role of the dorsal attention network (DAN) and ventral attention network (VAN) in endogenous and exogenous attention processes respectively. To that aim, we targeted the left and right dorsal parietal node of the DAN, as well as the left and right ventral temporoparietal node of the VAN using tACS at the individual alpha frequency. Every participant completed all four stimulation conditions and a sham condition in five separate sessions. During tACS, we assessed the behavioral visuospatial attention bias via an endogenous and exogenous visuospatial attention task. Additionally, we measured offline alpha power immediately before and after tACS using electroencephalography (EEG). The behavioral data revealed an effect of tACS on the endogenous but not exogenous attention bias, with a greater leftward bias during (sham-corrected) left than right hemispheric stimulation. In line with our hypothesis, this effect was brain area-specific, i.e., present for dorsal parietal but not ventral temporoparietal tACS. However, contrary to our expectations, there was no effect of ventral temporoparietal tACS on the exogenous visuospatial attention bias. Hence, no double dissociation between the two targeted attention networks. There was no effect of either tACS condition on offline alpha power. Our behavioral data reveal that dorsal parietal but not ventral temporoparietal alpha oscillations steer endogenous visuospatial attention. 
    This brain-area-specific tACS effect matches the previously proposed dissociation between the DAN and VAN and, by showing that the spatial attention bias effect does not generalize to any lateral posterior tACS montage, renders lateral cutaneous and retinal effects for the spatial attention bias in the dorsal parietal condition unlikely. Yet the absence of tACS effects on the exogenous attention task suggests that ventral temporoparietal alpha oscillations are not functionally relevant for exogenous visuospatial attention. We discuss the potential implications of this finding in the context of an emerging theory on the role of the ventral temporoparietal node.

    Additional information

    supplementary material
  • Kempen, G., & Harbusch, K. (2018). A competitive mechanism selecting verb-second versus verb-final word order in causative and argumentative clauses of spoken Dutch: A corpus-linguistic study. Language Sciences, 69, 30-42. doi:10.1016/j.langsci.2018.05.005.

    Abstract

    In Dutch and German, the canonical order of subject, object(s) and finite verb is ‘verb-second’ (V2) in main but ‘verb-final’ (VF) in subordinate clauses. This occasionally leads to the production of noncanonical word orders. Familiar examples are causative and argumentative clauses introduced by a subordinating conjunction (Du. omdat, Ger. weil ‘because’): the omdat/weil-V2 phenomenon. Such clauses may also be introduced by coordinating conjunctions (Du. want, Ger. denn), which license V2 exclusively. However, want/denn-VF structures are unknown. We present the results of a corpus study on the incidence of omdat-V2 in spoken Dutch, and compare them to published data on weil-V2 in spoken German. Basic findings: omdat-V2 is much less frequent than weil-V2 (ratio almost 1:8); and the frequency relations between coordinating and subordinating conjunctions are opposite (want >> omdat; denn << weil). We propose that conjunction selection and V2/VF selection proceed partly independently, and sometimes miscommunicate—e.g. yielding omdat/weil paired with V2. Want/denn-VF pairs do not occur because want/denn clauses are planned as autonomous sentences, which take V2 by default. We sketch a simple feedforward neural network with two layers of nodes (representing conjunctions and word orders, respectively) that can simulate the observed data pattern through inhibition-based competition of the alternative choices within the node layers.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G. (1995). De mythe van het woordbeeld: Spellingherziening taalpsychologisch doorgelicht. Onze Taal, 64(11), 275-277.
  • Kempen, G. (1995). Drinken eten mij Nim. Intermediair, 31(19), 41-45.
  • Kempen, G. (1995). 'Hier spreekt men Nederlands'. EMNET: Nieuwsbrief Elektronische Media, 22, 1.
  • Kempen, G. (1973). [Review of the book Psycholinguïstiek by B. Tervoort et al.]. Nederlands Tijdschrift voor de Psychologie, 28, 172-174.
  • Kempen, G. (1995). IJ of Y? Onze Taal, 64(9), 205-206.
  • Kempen, G., & Harbusch, K. (2019). Mutual attraction between high-frequency verbs and clause types with finite verbs in early positions: Corpus evidence from spoken English, Dutch, and German. Language, Cognition and Neuroscience, 34(9), 1140-1151. doi:10.1080/23273798.2019.1642498.

    Abstract

    We report a hitherto unknown statistical relationship between the corpus frequency of finite verbs and their fixed linear positions (early vs. late) in finite clauses of English, Dutch, and German. Compared to the overall frequency distribution of verb lemmas in the corpora, high-frequency finite verbs are overused in main clauses, at the expense of nonfinite verbs. This finite versus nonfinite split of high-frequency verbs is basically absent from subordinate clauses. Furthermore, this “main-clause bias” (MCB) of high-frequency verbs is more prominent in German and Dutch (SOV languages) than in English (an SVO language). We attribute the MCB and its varying effect sizes to faster accessibility of high-frequency finite verbs, which (1) increases the probability for these verbs to land in clauses mandating early verb placement, and (2) boosts the activation of clause plans that assign verbs to early linear positions (in casu: clauses with SVO as opposed to SOV order).

    Additional information

    plcp_a_1642498_sm1530.pdf
  • Kempen, G. (1995). Processing discontinuous lexical items: A reply to Frazier. Cognition, 55, 219-221. doi:10.1016/0010-0277(94)00657-7.

    Abstract

    Comments on a study by Frazier and others on Dutch-language lexical processing. Claims that the control condition in the experiment was inadequate and that an assumption made by Frazier about closed-class verbal items is inaccurate, and proposes an alternative account of a subset of the data from the experiment.
  • Kempen, G. (1995). Processing separable complex verbs in Dutch: Comments on Frazier, Flores d'Arcais, and Coolen (1993). Cognition, 54, 353-356. doi:10.1016/0010-0277(94)00649-6.

    Abstract

    Raises objections to L. Frazier et al.'s (see record 1994-32229-001) report of an experimental study intended to test Schreuder's (1990) Morphological Integration (MI) model concerning the processing of separable and inseparable verbs and shows that the logic of the experiment is flawed. The problem is rooted in the notion of a separable complex verb. The conclusion is drawn that Frazier et al.'s experimental data cannot be taken as evidence for the theoretical propositions they develop about the MI model.
  • Kempen, G. (1988). Preface. Acta Psychologica, 69(3), 205-206. doi:10.1016/0001-6918(88)90032-7.
  • Kempen, G. (1995). Van leescultuur en beeldcultuur naar internetcultuur. De Psycholoog, 30, 315-319.
  • Kemps, R. J. J. K., Ernestus, M., Schreuder, R., & Baayen, R. H. (2004). Processing reduced word forms: The suffix restoration effect. Brain and Language, 90(1-3), 117-127. doi:10.1016/S0093-934X(03)00425-5.

    Abstract

    Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
  • Wu, Q., Kidd, E., & Goodhew, S. C. (2019). The spatial mapping of concepts in English and Mandarin. Journal of Cognitive Psychology, 31(7), 703-724. doi:10.1080/20445911.2019.1663354.

    Abstract

    English speakers have been shown to map abstract concepts in space, which occurs on both the vertical and horizontal dimensions. For example, words such as God are associated with up and right spatial locations, and words such as Satan with down and left. If the tendency to map concepts in space is a universal property of human cognition, then it is likely that such mappings may be at least partly culturally-specific, since many concepts are themselves language-specific and therefore cultural conventions. Here we investigated whether Mandarin speakers report spatial mapping of concepts, and how these mappings compare with English speakers (i.e. are words with the same meaning associated with the same spatial locations). Across two studies, results showed that both native English and Mandarin speakers reported spatial mapping of concepts, and that the distribution of mappings was highly similar for the two groups. Theoretical implications are discussed.
  • Kidd, E., & Garcia, R. (2022). How diverse is child language acquisition research? First Language, 42(6), 703-735. doi:10.1177/01427237211066405.

    Abstract

    A comprehensive theory of child language acquisition requires an evidential base that is representative of the typological diversity present in the world’s 7000 or so languages. However, languages are dying at an alarming rate, and the next 50 years represents the last chance we have to document acquisition in many of them. Here, we take stock of the last 45 years of research published in the four main child language acquisition journals: Journal of Child Language, First Language, Language Acquisition and Language Learning and Development. We coded each article for several variables, including (1) participant group (mono vs multilingual), (2) language(s), (3) topic(s) and (4) country of author affiliation, from each journal’s inception until the end of 2020. We found that we have at least one article published on around 103 languages, representing approximately 1.5% of the world’s languages. The distribution of articles was highly skewed towards English and other well-studied Indo-European languages, with the majority of non-Indo-European languages having just one paper. A majority of the papers focused on studies of monolingual children, although papers did not always explicitly report participant group status. The distribution of topics across language categories was more even. The number of articles published on non-Indo-European languages from countries outside of North America and Europe is increasing; however, this increase is driven by research conducted in relatively wealthy countries. Overall, the vast majority of the research was produced in the Global North. We conclude that, despite a proud history of crosslinguistic research, the goals of the discipline need to be recalibrated before we can lay claim to a truly representative account of child language acquisition.

    Additional information

    Read author's response to comments
  • Kidd, E., & Garcia, R. (2022). Where to from here? Increasing language coverage while building a more diverse discipline. First Language, 42(6), 837-851. doi:10.1177/01427237221121190.

    Abstract

    Our original target article highlighted some significant shortcomings in the current state of child language research: a large skew in our evidential base towards English and a handful of other Indo-European languages that partly has its origins in a lack of researcher diversity. In this article, we respond to the 21 commentaries on our original article. The commentaries highlighted both the importance of attention to typological features of languages and the environments and contexts in which languages are acquired, with many commentators providing concrete suggestions on how we address the data skew. In this response, we synthesise the main themes of the commentaries and make suggestions for how the field can move towards both improving data coverage and opening up to traditionally under-represented researchers.

    Additional information

    Link to original target article
  • Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.

    Abstract

    We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.

    Additional information

    supporting information
  • Kidd, E., Donnelly, S., & Christiansen, M. H. (2018). Individual differences in language acquisition and processing. Trends in Cognitive Sciences, 22(2), 154-169. doi:10.1016/j.tics.2017.11.006.

    Abstract

    Humans differ in innumerable ways, with considerable variation observable at every level of description, from the molecular to the social. Traditionally, linguistic and psycholinguistic theory has downplayed the possibility of meaningful differences in language across individuals. However, it is becoming increasingly evident that there is significant variation among speakers at any age as well as across the lifespan. In this paper, we review recent research in psycholinguistics, and argue that a focus on individual differences provides a crucial source of evidence that bears strongly upon core issues in theories of the acquisition and processing of language; specifically, the role of experience in language acquisition, processing, and attainment, and the architecture of the language faculty.
  • Kidd, E. (2004). Grammars, parsers, and language acquisition. Journal of Child Language, 31(2), 480-483. doi:10.1017/S0305000904006117.

    Abstract

    Drozd's critique of Crain & Thornton's (C&T) (1998) book Investigations in Universal Grammar (IUG) raises many issues concerning theory and experimental design within generative approaches to language acquisition. I focus here on one of the strongest theoretical claims of the Modularity Matching Model (MMM): continuity of processing. For reasons different to Drozd, I argue that the assumption is tenuous. Furthermore, I argue that the focus of the MMM and the methodological prescriptions contained in IUG are too narrow to capture language acquisition.
  • Kim, N., Brehm, L., & Yoshida, M. (2019). The online processing of noun phrase ellipsis and mechanisms of antecedent retrieval. Language, Cognition and Neuroscience, 34(2), 190-213. doi:10.1080/23273798.2018.1513542.

    Abstract

    We investigate whether grammatical information is accessed in processing noun phrase ellipsis (NPE) and other anaphoric constructions. The first experiment used an agreement attraction paradigm to reveal that ungrammatical plural verbs following NPE with an antecedent containing a plural modifier (e.g. Derek’s key to the boxes … and Mary’s_ probably *are safe in the drawer) show similar facilitation to non-elided NPs. The second experiment used the same paradigm to examine a coordination construction without anaphoric elements, and the third examined anaphoric one. Agreement attraction was not observed in either experiment, suggesting that processing NPE is different from processing non-anaphoric coordination constructions or anaphoric one. Taken together, the results indicate that the parser is sensitive to grammatical distinctions at the ellipsis site where it prioritises and retrieves the head at the initial stage of processing and retrieves the local noun within the modifier phrase only when it is necessary in parsing NPE.

    Additional information

    Kim_Brehm_Yoshida_2018sup.pdf
  • Kinoshita, S., Schubert, T., & Verdonschot, R. G. (2019). Allograph priming is based on abstract letter identities: Evidence from Japanese kana. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(1), 183-190. doi:10.1037/xlm0000563.

    Abstract

    It is well-established that allographs like the uppercase and lowercase forms of the Roman alphabet (e.g., a and A) map onto the same "abstract letter identity," orthographic representations that are independent of the visual form. Consistent with this, in the allograph match task ("Are 'a' and 'A' the same letter?"), priming by a masked letter prime is equally robust for visually dissimilar prime-target pairs (e.g., d and D) and similar pairs (e.g., c and C). However, in principle this pattern of priming is also consistent with the possibility that allograph priming is purely phonological, based on the letter name. Because different allographic forms of the same letter, by definition, share a letter name, it is impossible to rule out this possibility a priori. In the present study, we investigated the influence of shared letter names by taking advantage of the fact that Japanese is written in two distinct writing systems, syllabic kana (which has two parallel forms, hiragana and katakana) and logographic kanji. Using the allograph match task, we tested whether a kanji prime with the same pronunciation as the target kana (e.g., both pronounced /i/) produces the same amount of priming as a kana prime in the opposite kana form. We found that the kana primes produced substantially greater priming than the phonologically identical kanji prime, which we take as evidence that allograph priming is based on abstract kana identity, not purely phonology.
  • Kinoshita, S., & Verdonschot, R. G. (2019). On recognizing Japanese katakana words: Explaining the reduced priming with hiragana and mixed-kana identity primes. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1513-1521. doi:10.1037/xhp0000692.

    Abstract

    The Japanese kana syllabary has 2 allographic forms, hiragana and katakana. As with other allographic variants like the uppercase and lowercase letters of the Roman alphabet, they show robust form-independent priming effects in the allograph match task (e.g., Kinoshita, Schubert, & Verdonschot, 2019), suggesting that they share abstract character-level representations. In direct contradiction, Perea, Nakayama, and Lupker (2017) argued that hiragana and katakana do not share character-level representations, based on their finding of reduced priming with an identity prime containing a mix of hiragana and katakana (the mixed-kana prime) relative to the all-katakana identity prime in a lexical-decision task with loanword targets written in katakana. Here we sought to reconcile these seemingly contradictory claims, using mixed-kana, hiragana, and katakana primes in lexical decision. The mixed-kana prime and hiragana prime produced priming effects that are indistinguishable, and both were reduced in size relative to the priming effect produced by the katakana identity prime. Furthermore, this pattern was unchanged when the target was presented in hiragana. The findings are interpreted in terms of the assumption that the katakana format is specified in the orthographic representation of loanwords in Japanese readers. Implications of this account for universality across writing systems are discussed.
  • Kircher, T. T. J., Brammer, M. J., Levelt, W. J. M., Bartels, M., & McGuire, P. K. (2004). Pausing for thought: Engagement of left temporal cortex during pauses in speech. NeuroImage, 21(1), 84-90. doi:10.1016/j.neuroimage.2003.09.041.

    Abstract

    Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional Magnetic Resonance Imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level dependent (BOLD) contrast during brief speech pauses (mean duration 1.3 s, SD 0.3 s) during overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred between grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when this occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning and in particular lexical retrieval.
  • Kirk, E., Donnelly, S., Furman, R., Warmington, M., Glanville, J., & Eggleston, A. (2022). The relationship between infant pointing and language development: A meta-analytic review. Developmental Review, 64: 101023. doi:10.1016/j.dr.2022.101023.

    Abstract

    Infant pointing has long been identified as an important precursor and predictor of language development. Infants typically begin to produce index finger pointing around the time of their first birthday and previous research has shown that both the onset and the frequency of pointing can predict aspects of productive and receptive language. The current study used a multivariate meta-analytic approach to estimate the strength of the relationship between infant pointing and language. We identified 30 papers published between 1984 and 2019 that met our stringent inclusion criteria, and 25 studies (comprising 77 effect sizes) with samples ≥10 were analysed. Methodological quality of the studies was assessed to identify potential sources of bias. We found a significant but small overall effect size of r = 0.20. Our findings indicate that the unique contribution of pointing to language development may be less robust than has been previously understood; however, our stringent inclusion criteria (as well as our publication bias corrections) mean that our data represent a more conservative estimate of the relationship between pointing and language. Moderator analysis showed significant group differences in favour of effect sizes related to language comprehension, non-vocabulary measures of language, pointing assessed after 18 months of age and pointing measured independent of speech. A significant strength of this study is the use of multivariate meta-analysis, which allowed us to utilise all available data to provide a more accurate estimate. We consider the findings in the context of the existing research and discuss the general limitations in this field, including the lack of cultural diversity.

    Additional information

    supplementary data
  • Kiyama, S., Verdonschot, R. G., Xiong, K., & Tamaoka, K. (2018). Individual mentalizing ability boosts flexibility toward a linguistic marker of social distance: An ERP investigation. Journal of Neurolinguistics, 47, 1-15. doi:10.1016/j.jneuroling.2018.01.005.

    Abstract

    Sentence-final particles (SFPs) as bound morphemes in Japanese have no obvious effect on the truth conditions of a sentence. However, they encompass a diverse range of usages, from typical to atypical, according to the context and the interpersonal relationships in the specific situation. The most frequent particle, -ne, is typically used after addressee-oriented propositions for information sharing, while another frequent particle, -yo, is typically used after addresser-oriented propositions to elicit a sense of strength. This study sheds light on individual differences among native speakers in flexibly understanding such linguistic markers based on their mentalizing ability (i.e., the ability to infer the mental states of others). Two experiments employing electroencephalography (EEG) consistently showed enhanced early posterior negativities (EPN) for atypical SFP usage compared to typical usage, especially when understanding -ne compared to -yo, in both an SFP appropriateness judgment task and a content comprehension task. Importantly, the amplitude of the EPN for atypical usages of -ne was significantly higher in participants with lower mentalizing ability than in those with a higher mentalizing ability. This effect plausibly reflects low-ability mentalizers' stronger sense of strangeness toward atypical -ne usage. While high-ability mentalizers may aptly perceive others' attitudes via their various usages of -ne, low-ability mentalizers seem to adopt a more stereotypical understanding. These results attest to the greater degree of difficulty low-ability mentalizers have in establishing a smooth regulation of interpersonal distance during social encounters.

    Additional information

    stimuli dialog sets
  • De Kleijn, R., Wijnen, M., & Poletiek, F. H. (2019). The effect of context-dependent information and sentence constructions on perceived humanness of an agent in a Turing test. Knowledge-Based Systems, 163, 794-799. doi:10.1016/j.knosys.2018.10.006.

    Abstract

    In a Turing test, a judge decides whether their conversation partner is either a machine or a human. What cues does the judge use to determine this? In particular, are presumably unique features of human language actually perceived as humanlike? Participants rated the humanness of a set of sentences that were manipulated for grammatical construction (linear right-branching or hierarchical center-embedded) and for their plausibility with regard to world knowledge.

    We found that center-embedded sentences are perceived as less humanlike than right-branching sentences, and that more plausible sentences are regarded as more humanlike. However, the effect of a sentence's plausibility on perceived humanness is smaller for center-embedded sentences than for right-branching sentences.

    Participants also rated a conversation with either correct or incorrect use of the context by the agent. No effect of context use was found. Also, participants rated a full transcript of either a real human or a real chatbot, and we found that chatbots were reliably perceived as less humanlike than real humans, in line with our expectation. We did, however, find individual differences between chatbots and humans.
  • Klein, W. (Ed.). (2004). Philologie auf neuen Wegen [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 136.
  • Klein, W. (Ed.). (2004). Universitas [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik (LiLi), 134.
  • Klein, W. (2004). Vom Wörterbuch zum digitalen lexikalischen System. Zeitschrift für Literaturwissenschaft und Linguistik, 136, 10-55.
  • Klein, W. (1995). A time-relational analysis of Russian aspect. Language, 71(4), 669-695.
