Publications

  • Heyselaar, E. (2017). Influences on the magnitude of syntactic priming. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and comparing their performance to controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group to determine which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Hibar, D. P., Adams, H. H. H., Jahanshad, N., Chauhan, G., Stein, J. L., Hofer, E., Rentería, M. E., Bis, J. C., Arias-Vasquez, A., Ikram, M. K., Desrivieres, S., Vernooij, M. W., Abramovic, L., Alhusaini, S., Amin, N., Andersson, M., Arfanakis, K., Aribisala, B. S., Armstrong, N. J., Athanasiu, L., and 312 more (2017). Novel genetic loci associated with hippocampal volume. Nature Communications, 8: 13624. doi:10.1038/ncomms13624.

    Abstract

    The hippocampal formation is a brain structure integrally involved in episodic memory, spatial navigation, cognition and stress responsiveness. Structural abnormalities in hippocampal volume and shape are found in several common neuropsychiatric disorders. To identify the genetic underpinnings of hippocampal structure here we perform a genome-wide association study (GWAS) of 33,536 individuals and discover six independent loci significantly associated with hippocampal volume, four of them novel. Of the novel loci, three lie within genes (ASTN2, DPP4 and MAST4) and one is found 200 kb upstream of SHH. A hippocampal subfield analysis shows that a locus within the MSRB3 gene shows evidence of a localized effect along the dentate gyrus, subiculum, CA1 and fissure. Further, we show that genetic variants associated with decreased hippocampal volume are also associated with increased risk for Alzheimer’s disease (rg=−0.155). Our findings suggest novel biological pathways through which human genetic variation influences hippocampal volume and risk for neuropsychiatric illness.

    Additional information

    ncomms13624-s1.pdf ncomms13624-s2.xlsx
  • Hill, C. (2010). Emergency language documentation teams: The Cape York Peninsula experience. In J. Hobson, K. Lowe, S. Poetsch, & M. Walsh (Eds.), Re-awakening languages: Theory and practice in the revitalisation of Australia’s Indigenous languages (pp. 418-432). Sydney: Sydney University Press.
  • Hill, C. (2010). [Review of the book Discourse and Grammar in Australian Languages ed. by Ilana Mushin and Brett Baker]. Studies in Language, 34(1), 215-225. doi:10.1075/sl.34.1.12hil.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2017). Predictors of verb-mediated anticipatory eye movements in the visual world. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1352-1374. doi:10.1037/xlm0000388.

    Abstract

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of five potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners’ production fluency, receptive vocabulary knowledge, and non-verbal intelligence. In three eye-tracking experiments, participants looked at sets of four objects and listened to sentences where the final word was predictable or not predictable (e.g., “The man peels/draws an apple”). On predictable trials only the target object, but not the distractors, was functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not predict anticipatory eye gaze, and non-verbal intelligence was only a very weak predictor of anticipatory eye movements. Participants’ production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). The effects of onset and offset masking on the time course of non-native spoken-word recognition in noise. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 133-139). Vienna: Cognitive Science Society.

    Abstract

    Using the visual-world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset- or offset-masked or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – similar to natives – non-native listeners strongly rely on word onset information during word recognition in noise.

    Additional information

    Link to Preprint on BioRxiv
  • Hintz, F. (2010). Speech and speaker recognition in dyslexic individuals. Bachelor Thesis, Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig)/University of Leipzig.
  • Hirschmann, J., Schoffelen, J.-M., Schnitzler, A., & Van Gerven, M. A. J. (2017). Parkinsonian rest tremor can be detected accurately based on neuronal oscillations recorded from the subthalamic nucleus. Clinical Neurophysiology, 128, 2029-2036. doi:10.1016/j.clinph.2017.07.419.

    Abstract

    Objective: To investigate the possibility of tremor detection based on deep brain activity.
    Methods: We re-analyzed recordings of local field potentials (LFPs) from the subthalamic nucleus in 10 PD patients (12 body sides) with spontaneously fluctuating rest tremor. Power in several frequency bands was estimated and used as input to Hidden Markov Models (HMMs) which classified short data segments as either tremor-free rest or rest tremor. HMMs were compared to direct threshold application to individual power features.
    Results: Applying a threshold directly to band-limited power was insufficient for tremor detection (mean area under the curve [AUC] of receiver operating characteristic: 0.64, STD: 0.19). Multi-feature HMMs, in contrast, allowed for accurate detection (mean AUC: 0.82, STD: 0.15), using four power features obtained from a single contact pair. Within-patient training yielded better accuracy than across-patient training (0.84 vs. 0.78, p = 0.03), yet tremor could often be detected accurately with either approach. High-frequency oscillations (>200 Hz) were the best performing individual feature.
    Conclusions: LFP-based markers of tremor are robust enough to allow for accurate tremor detection in short data segments, provided that appropriate statistical models are used.
    Significance: LFP-based markers of tremor could be useful control signals for closed-loop deep brain stimulation.
  • Hoedemaker, R. S., & Gordon, P. C. (2017). The onset and time course of semantic priming during rapid recognition of visual words. Journal of Experimental Psychology: Human Perception and Performance, 43(5), 881-902. doi:10.1037/xhp0000377.

    Abstract

    In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming.
  • Hoedemaker, R. S., Ernst, J., Meyer, A. S., & Belke, E. (2017). Language production in a shared task: Cumulative semantic interference from self- and other-produced context words. Acta Psychologica, 172, 55-63. doi:10.1016/j.actpsy.2016.11.007.

    Abstract

    This study assessed the effects of semantic context in the form of self-produced and other-produced words on subsequent language production. Pairs of participants performed a joint picture naming task, taking turns while naming a continuous series of pictures. In the single-speaker version of this paradigm, naming latencies have been found to increase for successive presentations of exemplars from the same category, a phenomenon known as Cumulative Semantic Interference (CSI). As expected, the joint-naming task showed a within-speaker CSI effect, such that naming latencies increased as a function of the number of category exemplars named previously by the participant (self-produced items). Crucially, we also observed an across-speaker CSI effect, such that naming latencies slowed as a function of the number of category members named by the participant's task partner (other-produced items). The magnitude of the across-speaker CSI effect did not vary as a function of whether or not the listening participant could see the pictures their partner was naming. The observation of across-speaker CSI suggests that the effect originates at the conceptual level of the language system, as proposed by Belke's (2013) Conceptual Accumulation account. Whereas self-produced and other-produced words both resulted in a CSI effect on naming latencies, post-experiment free recall rates were higher for self-produced than other-produced items. Together, these results suggest that both speaking and listening result in implicit learning at the conceptual level of the language system but that these effects are independent of explicit learning as indicated by item recall.
  • Hoeksema, N., Verga, L., Mengede, J., Van Roessel, C., Villanueva, S., Salazar-Casals, A., Rubio-Garcia, A., Curcic-Blake, B., Vernes, S. C., & Ravignani, A. (2021). Neuroanatomy of the grey seal brain: Bringing pinnipeds into the neurobiological study of vocal learning. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200252. doi:10.1098/rstb.2020.0252.

    Abstract

    Comparative studies of vocal learning and vocal non-learning animals can increase our understanding of the neurobiology and evolution of vocal learning and human speech. Mammalian vocal learning is understudied: most research has either focused on vocal learning in songbirds or its absence in non-human primates. Here we focus on a highly promising model species for the neurobiology of vocal learning: grey seals. We provide a neuroanatomical atlas (based on dissected brain slices and magnetic resonance images), a labelled MRI template, a 3D model with volumetric measurements of brain regions, and histological cortical stainings. Four main features of the grey seal brain stand out. (1) It is relatively big and highly convoluted. (2) It hosts a relatively large temporal lobe and cerebellum, structures which could support developed timing abilities and acoustic processing. (3) The cortex is similar to humans in thickness and shows the expected six-layered mammalian structure. (4) Expression of FoxP2 - a gene involved in vocal learning and spoken language - is present in deeper layers of the cortex. Our results could facilitate future studies targeting the neural and genetic underpinnings of mammalian vocal learning, thus bridging the research gap from songbirds to humans and non-human primates.
  • Hoey, E. (2017). [Review of the book Temporality in Interaction]. Studies in Language, 41(1), 232-238. doi:10.1075/sl.41.1.08hoe.
  • Hoey, E. (2017). Lapse organization in interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hoey, E. (2017). Sequence recompletion: A practice for managing lapses in conversation. Journal of Pragmatics, 109, 47-63. doi:10.1016/j.pragma.2016.12.008.

    Abstract

    Conversational interaction occasionally lapses as topics become exhausted or as participants are left with no obvious thing to talk about next. In this article I look at episodes of ordinary conversation to examine how participants resolve issues of speakership and sequentiality in lapse environments. In particular, I examine one recurrent phenomenon—sequence recompletion—whereby participants bring to completion a sequence of talk that was already treated as complete. Using conversation analysis, I describe four methods for sequence recompletion: turn-exiting, action redoings, delayed replies, and post-sequence transitions. With this practice, participants use verbal and vocal resources to locally manage their participation framework when ending one course of action and potentially starting up a new one.
  • Hoey, E., Hömke, P., Löfgren, E., Neumann, T., Schuerman, W. L., & Kendrick, K. H. (2021). Using expletive insertion to pursue and sanction in interaction. Journal of Sociolinguistics, 25(1), 3-25. doi:10.1111/josl.12439.

    Abstract

    This article uses conversation analysis to examine constructions like who the fuck is that—sequence‐initiating actions into which an expletive like the fuck has been inserted. We describe how this turn‐constructional practice fits into and constitutes a recurrent sequence of escalating actions. In this sequence, it is used to pursue an adequate response after an inadequate one was given, and sanction the recipient for that inadequate response. Our analysis contributes to sociolinguistic studies of swearing by offering an account of swearing as a resource for social action.
  • Holler, J., Alday, P. M., Decuyper, C., Geiger, M., Kendrick, K. H., & Meyer, A. S. (2021). Competition reduces response times in multiparty conversation. Frontiers in Psychology, 12: 693124. doi:10.3389/fpsyg.2021.693124.

    Abstract

    Natural conversations are characterized by short transition times between turns. This holds in particular for multi-party conversations. The short turn transitions in everyday conversations contrast sharply with the much longer speech onset latencies observed in laboratory studies where speakers respond to spoken utterances. There are many factors that facilitate speech production in conversational compared to laboratory settings. Here we highlight one of them, the impact of competition for turns. In multi-party conversations, speakers often compete for turns. In quantitative corpus analyses of multi-party conversation, the fastest response determines the recorded turn transition time. In contrast, in dyadic conversations such competition for turns is much less likely to arise, and in laboratory experiments with individual participants it does not arise at all. Therefore, all responses tend to be recorded. Thus, competition for turns may reduce the recorded mean turn transition times in multi-party conversations for a simple statistical reason: slow responses are not included in the means. We report two studies illustrating this point. We first report the results of simulations showing how much the response times in a laboratory experiment would be reduced if, for each trial, instead of recording all responses, only the fastest responses of several participants responding independently on the trial were recorded. We then present results from a quantitative corpus analysis comparing turn transition times in dyadic and triadic conversations. There was no significant group size effect in question-response transition times, where the present speaker often selects the next one, thus reducing competition between speakers. But, as predicted, triads showed shorter turn transition times than dyads for the remaining turn transitions, where competition for the floor was more likely to arise. Together, these data show that turn transition times in conversation should be interpreted in the context of group size, turn transition type, and social setting.
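    The statistical point in this abstract (when speakers compete, only the fastest response is recorded, so the recorded mean drops even if individual behaviour is unchanged) can be sketched in a few lines of Python. The latency distribution and its parameters below are illustrative assumptions, not values taken from the study.

    ```python
    import random

    def mean_recorded_latency(n_trials, group_size, seed=0):
        """Mean recorded response time when, on each trial, only the fastest
        of `group_size` independent responders is recorded.

        Latencies are drawn from a normal distribution (mean 600 ms, SD 200 ms);
        these parameters are an illustrative assumption.
        """
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_trials):
            responses = [rng.gauss(600, 200) for _ in range(group_size)]
            total += min(responses)  # competition: the fastest response wins the turn
        return total / n_trials

    solo = mean_recorded_latency(10000, 1)   # no competition: every response recorded
    triad = mean_recorded_latency(10000, 3)  # three speakers compete for the turn
    # The minimum of several draws is systematically smaller than a single draw,
    # so the recorded mean for triads falls below the solo mean.
    ```

    This is a pure order-statistics effect: no simulated speaker got faster, yet the recorded triad mean is well below the recorded solo mean.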
  • Holler, J., & Bavelas, J. (2017). Multi-modal communication of common ground: A review of social functions. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 213-240). Amsterdam: Benjamins.

    Abstract

    Until recently, the literature on common ground depicted its influence as a purely verbal phenomenon. We review current research on how common ground influences gesture. With informative exceptions, most experiments found that speakers used fewer gestures as well as fewer words in common ground contexts; i.e., the gesture/word ratio did not change. Common ground often led to more poorly articulated gestures, which parallels its effect on words. These findings support the principle of recipient design as well as more specific social functions such as grounding, the given-new contract, and Grice’s maxims. However, conceptual pacts or linking old with new information may maintain the original form. Altogether, these findings implicate gesture-speech ensembles rather than isolated effects on gestures alone.
  • Holler, J. (2010). Speakers’ use of interactive gestures to mark common ground. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction. 8th International Gesture Workshop, Bielefeld, Germany, 2009; Selected Revised Papers (pp. 11-22). Heidelberg: Springer Verlag.
  • Holler, J. (2004). Semantic and pragmatic aspects of representational gestures: Towards a unified model of communication in talk. PhD Thesis, University of Manchester, Manchester.
  • Holler, J., & Beattie, G. (2004). The interaction of iconic gesture and speech. In A. Cammurri, & G. Volpe (Eds.), Lecture Notes in Computer Science, 5th International Gesture Workshop, Genova, Italy, 2003; Selected Revised Papers (pp. 63-69). Heidelberg: Springer Verlag.
  • Hömke, P., Holler, J., & Levinson, S. C. (2017). Eye blinking as addressee feedback in face-to-face conversation. Research on Language and Social Interaction, 50, 54-70. doi:10.1080/08351813.2017.1262143.

    Abstract

    Does blinking function as a type of feedback in conversation? To address this question, we built a corpus of Dutch conversations, identified short and long addressee blinks during extended turns, and measured their occurrence relative to the end of turn constructional units (TCUs), the location where feedback typically occurs. Addressee blinks were indeed timed to the end of TCUs. Also, long blinks were more likely than short blinks to occur during mutual gaze, with nods or continuers, and their occurrence was restricted to sequential contexts in which signaling understanding was particularly relevant, suggesting a special signaling capacity of long blinks.
  • Horan Skilton, A., & Peeters, D. (2021). Cross-linguistic differences in demonstrative systems: Comparing spatial and non-spatial influences on demonstrative use in Ticuna and Dutch. Journal of Pragmatics, 180, 248-265. doi:10.1016/j.pragma.2021.05.001.

    Abstract

    In all spoken languages, speakers use demonstratives – words like this and that – to refer to entities in their immediate environment. But which factors determine whether they use one demonstrative (this) or another (that)? Here we report the results of an experiment examining the effects of referent visibility, referent distance, and addressee location on the production of demonstratives by speakers of Ticuna (isolate; Brazil, Colombia, Peru), an Amazonian language with four demonstratives, and speakers of Dutch (Indo-European; Netherlands, Belgium), which has two demonstratives. We found that Ticuna speakers’ use of demonstratives displayed effects of addressee location and referent distance, but not referent visibility. By contrast, under comparable conditions, Dutch speakers displayed sensitivity only to referent distance. Interestingly, we also observed that Ticuna speakers consistently used demonstratives in all referential utterances in our experimental paradigm, while Dutch speakers strongly preferred to use definite articles. Taken together, these findings shed light on the significant diversity found in demonstrative systems across languages. Additionally, they invite researchers studying exophoric demonstratives to broaden their horizons by cross-linguistically investigating the factors involved in speakers’ choice of demonstratives over other types of referring expressions, especially articles.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hörpel, S. G., Baier, L., Peremans, H., Reijniers, J., Wiegrebe, L., & Firzlaff, U. (2021). Communication breakdown: Limits of spectro-temporal resolution for the perception of bat communication calls. Scientific Reports, 11: 13708. doi:10.1038/s41598-021-92842-4.

    Abstract

    During vocal communication, the spectro-temporal structure of vocalizations conveys important contextual information. Bats excel in the use of sounds for echolocation by meticulous encoding of signals in the temporal domain. We therefore hypothesized that for social communication as well, bats would excel at detecting minute distortions in the spectro-temporal structure of calls. To test this hypothesis, we systematically introduced spectro-temporal distortion to communication calls of Phyllostomus discolor bats. We broke down each call into windows of the same length and randomized the phase spectrum inside each window. The overall degree of spectro-temporal distortion in communication calls increased with window length. Modelling the bat auditory periphery revealed that cochlear mechanisms allow discrimination of fast spectro-temporal envelopes. We evaluated model predictions with experimental psychophysical and neurophysiological data. We first assessed bats’ performance in discriminating original versions of calls from increasingly distorted versions of the same calls. We further examined cortical responses to determine additional specializations for call discrimination at the cortical level. Psychophysical and cortical responses concurred with model predictions, revealing discrimination thresholds in the range of 8–15 ms randomization-window length. Our data suggest that specialized cortical areas are not necessary to impart psychophysical resilience to temporal distortion in communication calls.

    Additional information

    supplementary information
  • Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.

    Abstract

    Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.

    Abstract

    This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings.
  • Huettig, F., & Altmann, G. T. M. (2004). The online processing of ambiguous and unambiguous words in context: Evidence from head-mounted eye-tracking. In M. Carreiras, & C. Clifton (Eds.), The on-line study of sentence comprehension: Eyetracking, ERP and beyond (pp. 187-207). New York: Psychology Press.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.

    Abstract

    In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing.
  • Huettig, F., Mishra, R. K., & Padakannaya, P. (2017). Editorial. Journal of Cultural Cognitive Science, 1(1), 1. doi:10.1007/s41809-017-0006-2.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.
  • Huisman, J. L. A., van Hout, R., & Majid, A. (2021). Patterns of semantic variation differ across body parts: evidence from the Japonic languages. Cognitive Linguistics, 32, 455-486. doi:10.1515/cog-2020-0079.

    Abstract

    The human body is central to myriad metaphors, so studying the conceptualisation of the body itself is critical if we are to understand its broader use. One essential but understudied issue is whether languages differ in which body parts they single out for naming. This paper takes a multi-method approach to investigate body part nomenclature within a single language family. Using both a naming task (Study 1) and colouring-in task (Study 2) to collect data from six Japonic languages, we found that lexical similarity for body part terminology was notably differentiated within Japonic, and similar variation was evident in semantics too. Novel application of cluster analysis on naming data revealed a relatively flat hierarchical structure for parts of the face, whereas parts of the body were organised with deeper hierarchical structure. The colouring data revealed that bounded parts show more stability across languages than unbounded parts. Overall, the data reveal there is not a single universal conceptualisation of the body as is often assumed, and that in-depth, multi-method explorations of under-studied languages are urgently required.
  • Huisman, J. L. A. (2021). Variation in form and meaning across the Japonic language family: With a focus on the Ryukyuan languages. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2021). Changes in theta and alpha oscillatory signatures of attentional control in older and middle age. European Journal of Neuroscience, 54(1), 4314-4337. doi:10.1111/ejn.15259.

    Abstract

    Recent behavioural research has reported age-related changes in the costs of refocusing attention from a temporal (rapid serial visual presentation) to a spatial (visual search) task. Using magnetoencephalography, we have now compared the neural signatures of attention refocusing between three age groups (19–30, 40–49 and 60+ years) and found differences in task-related modulation and cortical localisation of alpha and theta oscillations. Efficient, faster refocusing in the youngest group compared to both middle age and older groups was reflected in parietal theta effects that were significantly reduced in the older groups. Residual parietal theta activity in older individuals was beneficial to attentional refocusing and could reflect preserved attention mechanisms. Slowed refocusing of attention, especially when a target required consolidation, in the older and middle-aged adults was accompanied by a posterior theta deficit and increased recruitment of frontal (middle-aged and older groups) and temporal (older group only) areas, demonstrating a posterior to anterior processing shift. Theta but not alpha modulation correlated with task performance, suggesting that older adults' stronger and more widely distributed alpha power modulation could reflect decreased neural precision or dedifferentiation but requires further investigation. Our results demonstrate that older adults present with different alpha and theta oscillatory signatures during attentional control, reflecting cognitive decline and, potentially, also different cognitive strategies in an attempt to compensate for decline.

    Additional information

    supplementary material
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Hulten, A. (2010). Sanan tuottaminen [Word production]. In Kieli ja aivot [Language and the Brain - Textbook series] (pp. 106-116).
  • Humphries, S., Holler*, J., Crawford, T., & Poliakoff*, E. (2021). Co-speech gestures are a window into the effects of Parkinson’s disease on action representations. Journal of Experimental Psychology: General, 150(8), 1581-1597. doi:10.1037/xge0001002.

    Abstract

    (* indicates joint senior authors.) Parkinson’s disease impairs motor function and cognition, which together affect language and communication. Co-speech gestures are a form of language-related actions that provide imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement of intact neural sensory and motor systems. We tested this hypothesis with a fine-grained analysis of co-speech action gestures in Parkinson’s disease. 37 people with Parkinson’s disease and 33 controls described two scenes featuring actions which varied in their inherent degree of bodily motion. In addition to the perspective of action gestures (gestural viewpoint/first- vs. third-person perspective), we analysed how Parkinson’s patients represent manner (how something/someone moves) and path information (where something/someone moves to) in gesture, depending on the degree of bodily motion involved in the action depicted. We replicated an earlier finding that people with Parkinson’s disease are less likely to gesture about actions from a first-person perspective – preferring instead to depict actions gesturally from a third-person perspective – and show that this effect is modulated by the degree of bodily motion in the actions being depicted. When describing high motion actions, the Parkinson’s group were specifically impaired in depicting manner information in gesture and their use of third-person path-only gestures was significantly increased. Gestures about low motion actions were relatively spared. These results inform our understanding of the neural and cognitive basis of gesture production by providing neuropsychological evidence that action gesture production relies on intact motor network function.

    Additional information

    Open data and code
  • Hustá, C., Zheng, X., Papoutsi, C., & Piai, V. (2021). Electrophysiological signatures of conceptual and lexical retrieval from semantic memory. Neuropsychologia, 161: 107988. doi:10.1016/j.neuropsychologia.2021.107988.

    Abstract

    Retrieval from semantic memory of conceptual and lexical information is essential for producing speech. It is unclear whether there are differences in the neural mechanisms of conceptual and lexical retrieval when spreading activation through semantic memory is initiated by verbal or nonverbal settings. The same twenty participants took part in two EEG experiments. The first experiment examined conceptual and lexical retrieval following nonverbal settings, whereas the second experiment was a replication of previous studies examining conceptual and lexical retrieval following verbal settings. Target pictures were presented after constraining and nonconstraining contexts. In the nonverbal settings, contexts were provided as two priming pictures (e.g., constraining: nest, feather; nonconstraining: anchor, lipstick; target picture: BIRD). In the verbal settings, contexts were provided as sentences (e.g., constraining: “The farmer milked a...”; nonconstraining: “The child drew a...”; target picture: COW). Target pictures were named faster following constraining contexts in both experiments, indicating that conceptual preparation starts before target picture onset in constraining conditions. In the verbal experiment, we replicated the alpha-beta power decreases in constraining relative to nonconstraining conditions before target picture onset. No such power decreases were found in the nonverbal experiment. Power decreases in constraining relative to nonconstraining conditions were significantly different between experiments. Our findings suggest that participants engage in conceptual preparation following verbal and nonverbal settings, albeit differently. The retrieval of a target word, initiated by verbal settings, is associated with alpha-beta power decreases. By contrast, broad conceptual preparation alone, prompted by nonverbal settings, does not seem enough to elicit alpha-beta power decreases. These findings have implications for theories of oscillations and semantic memory.

    Additional information

    1-s2.0-S0028393221002414-mmc1.pdf
  • Iacozza, S., Costa, A., & Duñabeitia, J. A. (2017). What do your eyes reveal about your foreign language? Reading emotional sentences in a native and foreign language. PLoS One, 12(10): e0186027. doi:10.1371/journal.pone.0186027.

    Abstract

    Foreign languages are often learned in emotionally neutral academic environments which differ greatly from the familiar context where native languages are acquired. This difference in learning contexts has been argued to lead to reduced emotional resonance when confronted with a foreign language. In the current study, we investigated whether the reactivity of the sympathetic nervous system in response to emotionally-charged stimuli is reduced in a foreign language. To this end, pupil sizes were recorded while reading aloud emotional sentences in the native or foreign language. Additionally, subjective ratings of emotional impact were provided after reading each sentence, allowing us to further investigate foreign language effects on explicit emotional understanding. Pupillary responses showed a larger effect of emotion in the native than in the foreign language. However, such a difference was not present for explicit ratings of emotionality. These results reveal that the sympathetic nervous system reacts differently depending on the language context, which in turns suggests a deeper emotional processing when reading in a native compared to a foreign language.

    Additional information

    pone.0186027.s001.docx
  • Ille, S., Ohlerth, A.-K., Colle, D., Colle, H., Dragoy, O., Goodden, J., Robe, P., Rofes, A., Mandonnet, E., Robert, E., Satoer, D., Viegas, C., Visch-Brink, E., van Zandvoort, M., & Krieg, S. (2021). Augmented reality for the virtual dissection of white matter pathways. Acta Neurochirurgica, (4), 895-903. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background: The human white matter pathway network is complex and of critical importance for functionality. Thus, learning and understanding white matter tract anatomy is important for the training of neuroscientists and neurosurgeons. The study aims to test and evaluate a new method for fiber dissection using augmented reality (AR) in a group which is experienced in cadaver white matter dissection courses and in vivo tractography. Methods: Fifteen neurosurgeons, neurolinguists, and neuroscientists participated in this questionnaire-based study. We presented five cases of patients with left-sided perisylvian gliomas who underwent awake craniotomy. Diffusion tensor imaging fiber tracking (DTI FT) was performed and the language-related networks were visualized, separated into different tracts by color. Participants were able to virtually dissect the prepared DTI FTs using a spatial computer and AR goggles. The application was evaluated through a questionnaire with answers from 0 (minimum) to 10 (maximum). Results: Participants rated the overall experience of AR fiber dissection with a median of 8 points (mean ± standard deviation 8.5 ± 1.4). Usefulness for fiber dissection courses and education in general was rated with 8 (8.3 ± 1.4) and 8 (8.1 ± 1.5) points, respectively. Educational value was expected to be high for several target audiences (student: median 9, 8.6 ± 1.4; resident: 9, 8.5 ± 1.8; surgeon: 9, 8.2 ± 2.4; scientist: 8.5, 8.0 ± 2.4). Even clinical application of AR fiber dissection was expected to be of value with a median of 7 points (7.0 ± 2.5).
  • Indefrey, P., & Cutler, A. (2004). Prelexical and lexical processing in listening. In M. Gazzaniga (Ed.), The cognitive neurosciences III. (pp. 759-774). Cambridge, MA: MIT Press.

    Abstract

    This paper presents a meta-analysis of hemodynamic studies on passive auditory language processing. We assess the overlap of hemodynamic activation areas and activation maxima reported in experiments involving the presentation of sentences, words, pseudowords, or sublexical or non-linguistic auditory stimuli. Areas that have been reliably replicated are identified. The results of the meta-analysis are compared to electrophysiological, magnetoencephalographic (MEG), and clinical findings. It is concluded that auditory language input is processed in a left posterior frontal and bilateral temporal cortical network. Within this network, no processing level is related to a single cortical area. The temporal lobes seem to differ with respect to their involvement in post-lexical processing, in that the left temporal lobe has greater involvement than the right, and also in the degree of anatomical specialization for phonological, lexical, and sentence-level processing, with greater overlap on the right contrasting with a higher degree of differentiation on the left.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (2004). Hirnaktivierungen bei syntaktischer Sprachverarbeitung: Eine Meta-Analyse. In H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 31-50). Tübingen: Stauffenburg.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 1-4). Malden, MA: Wiley-Blackwell.
  • Indefrey, P., Sahin, H., & Gullberg, M. (2017). The expression of spatial relationships in Turkish-Dutch bilinguals. Bilingualism: Language and Cognition, 20(3), 473-493. doi:10.1017/S1366728915000875.

    Abstract

    We investigated how two groups of Turkish-Dutch bilinguals and two groups of monolingual speakers of the two languages described static topological relations. The bilingual groups differed with respect to their first (L1) and second (L2) language proficiencies and a number of sociolinguistic factors. Using an elicitation tool that covers a wide range of topological relations, we first assessed the extensions of different spatial expressions (topological relation markers, TRMs) in the Turkish and Dutch spoken by monolingual speakers. We then assessed differences in the use of TRMs between the two bilingual groups and monolingual speakers. In both bilingual groups, differences compared to monolingual speakers were mainly observed for Turkish. Dutch-dominant bilinguals showed enhanced congruence between translation-equivalent Turkish and Dutch TRMs. Turkish-dominant bilinguals extended the use of a topologically neutral locative marker. Our results can be interpreted as showing different bilingual optimization strategies (Muysken, 2013) in bilingual speakers who live in the same environment but differ with respect to L2 onset, L2 proficiency, and perceived importance of the L1.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 x 10(-5)-1.7 x 10(-3), common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes-as well as the neighbouring phosphodiesterase 7B (PDE7B)-may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Isbilen, E. S., McCauley, S. M., Kidd, E., & Christiansen, M. H. (2017). Testing statistical learning implicitly: A novel chunk-based measure of statistical learning. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 564-569). Austin, TX: Cognitive Science Society.

    Abstract

    Attempts to connect individual differences in statistical learning with broader aspects of cognition have received considerable attention, but have yielded mixed results. A possible explanation is that statistical learning is typically tested using the two-alternative forced choice (2AFC) task. As a meta-cognitive task relying on explicit familiarity judgments, 2AFC may not accurately capture implicitly formed statistical computations. In this paper, we adapt the classic serial-recall memory paradigm to implicitly test statistical learning in a statistically-induced chunking recall (SICR) task. We hypothesized that artificial language exposure would lead subjects to chunk recurring statistical patterns, facilitating recall of words from the input. Experiment 1 demonstrates that SICR offers more fine-grained insights into individual differences in statistical learning than 2AFC. Experiment 2 shows that SICR has higher test-retest reliability than that reported for 2AFC. Thus, SICR offers a more sensitive measure of individual differences, suggesting that basic chunking abilities may explain statistical learning.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). How robust are prediction effects in language comprehension? Failure to replicate article-elicited N400 effects. Language, Cognition and Neuroscience, 32, 954-965. doi:10.1080/23273798.2016.1242761.

    Abstract

    Current psycholinguistic theory proffers prediction as a central, explanatory mechanism in language processing. However, widely-replicated prediction effects may not mean that prediction is necessary in language processing. As a case in point, C. D. Martin et al. [2013. Bilinguals reading in their second language do not predict upcoming words as native readers do. Journal of Memory and Language, 69(4), 574-588. doi:10.1016/j.jml.2013.08.001] reported ERP evidence for prediction in native- but not in non-native speakers. Articles mismatching an expected noun elicited larger negativity in the N400 time window compared to articles matching the expected noun in native speakers only. We attempted to replicate these findings, but found no evidence for prediction irrespective of language nativeness. We argue that pre-activation of phonological form of upcoming nouns, as evidenced in article-elicited effects, may not be a robust phenomenon. A view of prediction as a necessary computation in language comprehension must be re-evaluated.
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). On predicting form and meaning in a second language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 635-652. doi:10.1037/xlm0000315.

    Abstract

    We used event-related potentials (ERP) to investigate whether Spanish−English bilinguals preactivate form and meaning of predictable words. Participants read high-cloze sentence contexts (e.g., “The student is going to the library to borrow a . . .”), followed by the predictable word (book), a word that was form-related (hook) or semantically related (page) to the predictable word, or an unrelated word (sofa). Word stimulus onset synchrony (SOA) was 500 ms (Experiment 1) or 700 ms (Experiment 2). In both experiments, all nonpredictable words elicited classic N400 effects. Form-related and unrelated words elicited similar N400 effects. Semantically related words elicited smaller N400s than unrelated words, which however, did not depend on cloze value of the predictable word. Thus, we found no N400 evidence for preactivation of form or meaning at either SOA, unlike native-speaker results (Ito, Corley et al., 2016). However, non-native speakers did show the post-N400 posterior positivity (LPC effect) for form-related words like native speakers, but only at the slower SOA. This LPC effect increased gradually with cloze value of the predictable word. We do not interpret this effect as necessarily demonstrating prediction, but rather as evincing combined effects of top-down activation (contextual meaning) and bottom-up activation (form similarity) that result in activation of unseen words that fit the context well, thereby leading to an interpretation conflict reflected in the LPC. Although there was no evidence that non-native speakers preactivate form or meaning, non-native speakers nonetheless appear to use bottom-up and top-down information to constrain incremental interpretation much like native speakers do.
  • Ito, A., Martin, A. E., & Nieuwland, M. S. (2017). Why the A/AN prediction effect may be hard to replicate: A rebuttal to DeLong, Urbach & Kutas (2017). Language, Cognition and Neuroscience, 32(8), 974-983. doi:10.1080/23273798.2017.1323112.
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Yu, X., Janse, E., & Schoonen, R. (2021). The effect of learning context on L2 listening development: Knowledge and processing. Studies in Second Language Acquisition, 43(2), 329-354. doi:10.1017/S0272263120000534.

    Abstract

    Little research has been done on the effect of learning context on L2 listening development. Motivated by DeKeyser’s (2015) skill acquisition theory of second language acquisition, this study compares L2 listening development in study abroad (SA) and at home (AH) contexts from both language knowledge and processing perspectives. One hundred forty-nine Chinese postgraduates studying in either China or the United Kingdom participated in a battery of listening tasks at the beginning and at the end of an academic year. These tasks measure auditory vocabulary knowledge and listening processing efficiency (i.e., accuracy, speed, and stability of processing) in word recognition, grammatical processing, and semantic analysis. Results show that, provided equal starting levels, the SA learners made more progress than the AH learners in speed of processing across the language processing tasks, with less clear results for vocabulary acquisition. Studying abroad may be an effective intervention for L2 learning, especially in terms of processing speed.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen [Engl. Auditory perception in healthy speakers and in speakers with acquired language disorders]. Afasiologie, 26(1), 2-6.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E., & Andringa, S. J. (2021). The roles of cognitive abilities and hearing acuity in older adults’ recognition of words taken from fast and spectrally reduced speech. Applied Psycholinguistics, 42(3), 763-790. doi:10.1017/S0142716421000047.

    Abstract

    Previous literature has identified several cognitive abilities as predictors of individual differences in speech perception. Working memory was chief among them, but effects have also been found for processing speed. Most research has been conducted on speech in noise, but fast and unclear articulation also makes listening challenging, particularly for older listeners. As a first step toward specifying the cognitive mechanisms underlying spoken word recognition, we set up this study to determine which factors explain unique variation in word identification accuracy in fast speech, and the extent to which this was affected by further degradation of the speech signal. To that end, 105 older adults were tested on identification accuracy of fast words in unaltered and degraded conditions in which the speech stimuli were low-pass filtered. They were also tested on processing speed, memory, vocabulary knowledge, and hearing sensitivity. A structural equation analysis showed that only memory and hearing sensitivity explained unique variance in word recognition in both listening conditions. Working memory was more strongly associated with performance in the unfiltered than in the filtered condition. These results suggest that memory skills, rather than speed, facilitate the mapping of single words onto stored lexical representations, particularly in conditions of medium difficulty.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Jansen, N. A., Braden, R. O., Srivastava, S., Otness, E. F., Lesca, G., Rossi, M., Nizon, M., Bernier, R. A., Quelin, C., Van Haeringen, A., Kleefstra, T., Wong, M. M. K., Whalen, S., Fisher, S. E., Morgan, A. T., & Van Bon, B. W. (2021). Clinical delineation of SETBP1 haploinsufficiency disorder. European Journal of Human Genetics, 29, 1198-1205. doi:10.1038/s41431-021-00888-9.

    Abstract

    SETBP1 haploinsufficiency disorder (MIM#616078) is caused by haploinsufficiency of SETBP1 on chromosome 18q12.3, but there has not yet been any systematic evaluation of the major features of this monogenic syndrome, assessing penetrance and expressivity. We describe the first comprehensive study to delineate the associated clinical phenotype, with findings from 34 individuals, including 24 novel cases, all of whom have a SETBP1 loss-of-function variant or single (coding) gene deletion, confirmed by molecular diagnostics. The most commonly reported clinical features included mild motor developmental delay, speech impairment, intellectual disability, hypotonia, vision impairment, attention/concentration deficits, and hyperactivity. Although there is a mild overlap in certain facial features, the disorder does not lead to a distinctive recognizable facial gestalt. As well as providing insight into the clinical spectrum of SETBP1 haploinsufficiency disorder, this report puts forward care recommendations for patient management.

    Additional information

    supplementary table
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janssen, J., Díaz-Caneja, C. M., Alloza, C., Schippers, A., De Hoyos, L., Santonja, J., Gordaliza, P. M., Buimer, E. E. L., van Haren, N. E. M., Cahn, W., Arango, C., Kahn, R. S., Hulshoff Pol, H. E., & Schnack, H. G. (2021). Dissimilarity in sulcal width patterns in the cortex can be used to identify patients with schizophrenia with extreme deficits in cognitive performance. Schizophrenia Bulletin, 47(2), 552-561. doi:10.1093/schbul/sbaa131.

    Abstract

    Schizophrenia is a biologically complex disorder with multiple regional deficits in cortical brain morphology. In addition, interindividual heterogeneity of cortical morphological metrics is larger in patients with schizophrenia when compared to healthy controls. Exploiting interindividual differences in the severity of cortical morphological deficits in patients instead of focusing on group averages may aid in detecting biologically informed homogeneous subgroups. The person-based similarity index (PBSI) of brain morphology indexes an individual’s morphometric similarity across numerous cortical regions amongst a sample of healthy subjects. We extended the PBSI such that it indexes the morphometric similarity of an independent individual (eg, a patient) with respect to healthy control subjects. By employing a normative modeling approach on longitudinal data, we determined an individual’s degree of morphometric dissimilarity to the norm. We calculated the PBSI for sulcal width (PBSI-SW) in patients with schizophrenia and healthy control subjects (164 patients and 164 healthy controls; 656 magnetic resonance imaging scans) and associated it with cognitive performance and cortical sulcation index. A subgroup of patients with markedly deviant PBSI-SW showed extreme deficits in cognitive performance and cortical sulcation. Progressive reduction of PBSI-SW in the schizophrenia group relative to healthy controls was driven by these deviating individuals. By explicitly leveraging interindividual differences in the severity of PBSI-SW deficits, neuroimaging-driven subgrouping of patients is feasible. As such, our results pave the way for future applications of morphometric similarity indices for subtyping of clinical populations.

  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2017). Transfer from implicit to explicit phonological abilities in first and second language learners. Bilingualism: Language and Cognition, 20(4), 795-812. doi:10.1017/S1366728916000523.

    Abstract

    Children's abilities to process the phonological structure of words are important predictors of their literacy development. In the current study, we examined the interrelatedness between implicit (i.e., speech decoding) and explicit (i.e., phonological awareness) phonological abilities, and especially the role therein of lexical specificity (i.e., the ability to learn to recognize spoken words based on only minimal acoustic-phonetic differences). We tested 75 Dutch monolingual and 64 Turkish–Dutch bilingual kindergartners. SEM analyses showed that speech decoding predicted lexical specificity, which in turn predicted rhyme awareness in the first language learners but phoneme awareness in the second language learners. Moreover, in the latter group there was an impact of the second language: Dutch speech decoding and lexical specificity predicted Turkish phonological awareness, which in turn predicted Dutch phonological awareness. We conclude that language-specific phonological characteristics underlie different patterns of transfer from implicit to explicit phonological abilities in first and second language learners.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Janzen, G., & Weststeijn, C. (2004). Neural representation of object location and route direction: An fMRI study. NeuroImage, 22(Supplement 1), e634-e635.
  • Janzen, G., & Van Turennout, M. (2004). Neuronale Markierung navigationsrelevanter Objekte im räumlichen Gedächtnis: Ein fMRT Experiment [Engl. Neural tagging of navigationally relevant objects in spatial memory: An fMRI experiment]. In D. Kerzel (Ed.), Beiträge zur 46. Tagung experimentell arbeitender Psychologen (pp. 125-125). Lengerich: Pabst Science Publishers.
  • Jara-Ettinger, J., & Rubio-Fernández, P. (2021). Quantitative mental state attributions in language understanding. Science Advances, 7: eabj0970. doi:10.1126/sciadv.abj0970.

    Abstract

    Human social intelligence relies on our ability to infer other people’s mental states such as their beliefs, desires, and intentions. While people are proficient at mental state inference from physical action, it is unknown whether people can make inferences of comparable granularity from simple linguistic events. Here, we show that people can make quantitative mental state attributions from simple referential expressions, replicating the fine-grained inferential structure characteristic of nonlinguistic theory of mind. Moreover, people quantitatively adjust these inferences after brief exposures to speaker-specific speech patterns. These judgments matched the predictions made by our computational model of theory of mind in language, but could not be explained by a simpler qualitative model that attributes mental states deductively. Our findings show how the connection between language and theory of mind runs deep, with their interaction showing in one of the most fundamental forms of human communication: reference.

    Additional information

    https://osf.io/h8qfy/
  • Järvikivi, J., & Pyykkönen, P. (2010). Lauseiden ymmärtäminen [Engl. Sentence comprehension]. In P. Korpilahti, O. Aaltonen, & M. Laine (Eds.), Kieli ja aivot: Kommunikaation perusteet, häiriöt ja kuntoutus (pp. 117-125). Turku: Turku yliopisto.

    Abstract

    [Translated from Finnish] When we listen to speech or read a text, we immediately begin to construct a coherent interpretation. Unlike in reading, in speech perception the listener can rarely control the rate at which they are spoken to. Despite the very rapid input, about 4-7 syllables per second, people are able to interpret speech quite effortlessly. Research on sentence comprehension therefore investigates how this rapid and usually effortless interpretation process takes place, which cognitive processes participate in real-time interpretation, and what kind of information a person exploits at each stage of processing to form a coherent interpretation. This chapter is an overview of sentence comprehension processes and their study. We briefly discuss processing models, the relation between adult and child language, the interpretation of referential relations within and between sentences, and the role of the sensory environment and motor action in the sentence interpretation process.
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. Plos One, 5(9), e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 159). York: University of York.
  • Jaspers, D., Klooster, W., Putseys, Y., & Seuren, P. A. M. (Eds.). (1989). Sentential complementation and the lexicon: Studies in honour of Wim de Geest. Dordrecht: Foris.
  • Jeltema, H., Ohlerth, A.-K., de Wit, A., Wagemakers, M., Rofes, A., Bastiaanse, R., & Drost, G. (2021). Comparing navigated transcranial magnetic stimulation mapping and "gold standard" direct cortical stimulation mapping in neurosurgery: a systematic review. Neurosurgical Review, (4), 1903-1920. doi:10.1007/s10143-020-01397-x.

    Abstract

    The objective of this systematic review is to create an overview of the literature on the comparison of navigated transcranial magnetic stimulation (nTMS) as a mapping tool to the current gold standard, which is (intraoperative) direct cortical stimulation (DCS) mapping. A search in the databases of PubMed, EMBASE, and Web of Science was performed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and recommendations were used. Thirty-five publications were included in the review, describing a total of 552 patients. All studies concerned either mapping of motor or language function. No comparative data for nTMS and DCS for other neurological functions were found. For motor mapping, the distances between the cortical representation of the different muscle groups identified by nTMS and DCS varied between 2 and 16 mm. Regarding mapping of language function, solely an object naming task was performed in the comparative studies on nTMS and DCS. Sensitivity and specificity ranged from 10% to 100% and from 13.3% to 98%, respectively, when nTMS language mapping was compared with DCS mapping. The positive predictive value (PPV) and negative predictive value (NPV) ranged from 17% to 75% and from 57% to 100%, respectively. The available evidence for nTMS as a mapping modality for motor and language function is discussed.
  • Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.

    Abstract

    Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test listeners applied this knowledge to old and novel items even in the absence of informative ToV.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was further robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates rather than at longer ones. Visual speech information plays, therefore, a more important role early during the phoneme rather than later. The results of the study showed the complex interplay of information across modalities and time, since this is essential in determining the time course of audiovisual spoken-word recognition.
  • Johns, T. G., Perera, R. M., Vitali, A. A., Vernes, S. C., & Scott, A. (2004). Phosphorylation of a glioma-specific mutation of the EGFR [Abstract]. Neuro-Oncology, 6, 317.

    Abstract

    Mutations of the epidermal growth factor receptor (EGFR) gene are found at a relatively high frequency in glioma, with the most common being the de2-7 EGFR (or EGFRvIII). This mutation arises from an in-frame deletion of exons 2-7, which removes 267 amino acids from the extracellular domain of the receptor. Despite being unable to bind ligand, the de2-7 EGFR is constitutively active at a low level. Transfection of human glioma cells with the de2-7 EGFR has little effect in vitro, but when grown as tumor xenografts this mutated receptor imparts a dramatic growth advantage. We mapped the phosphorylation pattern of de2-7 EGFR, both in vivo and in vitro, using a panel of antibodies specific for different phosphorylated tyrosine residues. Phosphorylation of de2-7 EGFR was detected constitutively at all tyrosine sites surveyed in vitro and in vivo, including tyrosine 845, a known target in the wild-type EGFR for src kinase. There was a substantial upregulation of phosphorylation at every tyrosine residue of the de2-7 EGFR when cells were grown in vivo compared to the receptor isolated from cells cultured in vitro. Upregulation of phosphorylation at tyrosine 845 could be stimulated in vitro by the addition of specific components of the ECM via an integrin-dependent mechanism. These observations may partially explain why the growth enhancement mediated by de2-7 EGFR is largely restricted to the in vivo environment.
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
  • Jones, G., & Rowland, C. F. (2017). Diversity not quantity in caregiver speech: Using computational modeling to isolate the effects of the quantity and the diversity of the input on vocabulary growth. Cognitive Psychology, 98, 1-21. doi:10.1016/j.cogpsych.2017.07.002.

    Abstract

    Children who hear large amounts of diverse speech learn language more quickly than children who do not. However, high correlations between the amount and the diversity of the input in speech samples make it difficult to isolate the influence of each. We overcame this problem by controlling the input to a computational model so that amount of exposure to linguistic input (quantity) and the quality of that input (lexical diversity) were independently manipulated. Sublexical, lexical, and multi-word knowledge were charted across development (Study 1), showing that while input quantity may be important early in learning, lexical diversity is ultimately more crucial, a prediction confirmed against children’s data (Study 2). The model trained on a lexically diverse input also performed better on nonword repetition and sentence recall tests (Study 3) and was quicker to learn new words over time (Study 4). A language input that is rich in lexical diversity outperforms equivalent richness in quantity for learned sublexical and lexical knowledge, for well-established language tests, and for acquiring words that have never been encountered before.
  • Jones, G., Cabiddu, F., Andrews, M., & Rowland, C. F. (2021). Chunks of phonological knowledge play a significant role in children’s word learning and explain effects of neighborhood size, phonotactic probability, word frequency and word length. Journal of Memory and Language, 119: 104232. doi:10.1016/j.jml.2021.104232.

    Abstract

    A key omission from many accounts of children’s early word learning is the linguistic knowledge that the child has acquired up to the point when learning occurs. We simulate this knowledge using a computational model that learns phoneme and word sequence knowledge from naturalistic language corpora. We show how this simple model is able to account for effects of word length, word frequency, neighborhood density and phonotactic probability on children’s early word learning. Moreover, we show how effects of neighborhood density and phonotactic probability on word learning are largely influenced by word length, with our model being able to capture all effects. We then use predictions from the model to show how the ease with which a child learns a new word from maternal input is directly influenced by the phonological knowledge that the child has acquired from other words up to the point of encountering the new word. There are major implications of this work: models and theories of early word learning need to incorporate existing sublexical and lexical knowledge in explaining developmental change, while well-established indices of word learning are rejected in favor of phonological knowledge of varying grain sizes.

    Additional information

    supplementary data; research data
  • Jongman, S. R. (2017). Sustained attention ability affects simple picture naming. Collabra: Psychology, 3(1): 17. doi:10.1525/collabra.84.

    Abstract

    Sustained attention has previously been shown as a requirement for language production. However, this is mostly evident for difficult conditions, such as a dual-task situation. The current study provides corroborating evidence that this relationship holds even for simple picture naming. Sustained attention ability, indexed both by participants’ reaction times and individuals’ hit rate (the proportion of correctly detected targets) on a digit discrimination task, correlated with picture naming latencies. Individuals with poor sustained attention were consistently slower and their RT distributions were more positively skewed when naming pictures compared to individuals with better sustained attention. Additionally, the need to sustain attention was manipulated by changing the speed of stimulus presentation. Research has suggested that fast event rates tax sustained attention resources to a larger degree than slow event rates. However, in this study the fast event rate did not result in increased difficulty, neither for the picture naming task nor for the sustained attention task. Instead, the results point to a speed-accuracy trade-off in the sustained attention task (lower accuracy but faster responses in the fast than in the slow event rate), and to a benefit for faster rates in the picture naming task (shorter naming latencies with no difference in accuracy). Performance on both tasks was largely comparable, supporting previous findings that sustained attention is called upon during language production.
  • Jongman, S. R., Roelofs, A., Scheper, A., & Meyer, A. S. (2017). Picture naming in typically developing and language impaired children: The role of sustained attention. International Journal of Language & Communication Disorders, 52(3), 323-333. doi:10.1111/1460-6984.12275.

    Abstract

    Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality.
    Aims

    To address the outstanding issue whether the impairment in sustained attention is limited to the auditory domain, or if it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills.
    Methods & Procedures

    Groups of 7–9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs).
    Outcomes & Results

    Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups.
    Conclusions & Implications

    These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups.
  • Jongman, S. R., & Meyer, A. S. (2017). To plan or not to plan: Does planning for production remove facilitation from associative priming? Acta Psychologica, 181, 40-50. doi:10.1016/j.actpsy.2017.10.003.

    Abstract

    Theories of conversation propose that in order to have smooth transitions from one turn to the next, speakers already plan their response while listening to their interlocutor. Moreover, it has been argued that speakers align their linguistic representations (i.e. prime each other), thereby reducing the processing costs associated with concurrent listening and speaking. In two experiments, we assessed how identity and associative priming from spoken words onto picture naming were affected by a concurrent speech planning task. In a baseline (no name) condition, participants heard prime words that were identical, associatively related, or unrelated to target pictures presented two seconds after prime onset. Each prime was accompanied by a non-target picture and followed by its recorded name. The participant did not name the non-target picture. In the plan condition, the participants first named the non-target picture, instead of listening to the recording, and then the target. In Experiment 1, where the plan and no-plan conditions were tested between participants, priming effects of equal strength were found in the plan and no-plan condition. In Experiment 2, where the two conditions were tested within participants, the identity priming effect was maintained, but the associative priming effect was only seen in the no-plan but not in the plan condition. In this experiment, participants had to decide at the onset of each trial whether or not to name the non-target picture, rendering the task more complex than in Experiment 1. These decision processes may have interfered with the processing of the primes. Thus, associative priming can take place during speech planning, but only if the cognitive load is not too high.
  • Jongman, S. R., Khoe, Y. H., & Hintz, F. (2021). Vocabulary size influences spontaneous speech in native language users: Validating the use of automatic speech recognition in individual differences research. Language and Speech, 64(1), 35-51. doi:10.1177/0023830920911079.

    Abstract

    Previous research has shown that vocabulary size affects performance on laboratory word production tasks. Individuals who know many words show faster lexical access and retrieve more words belonging to pre-specified categories than individuals who know fewer words. The present study examined the relationship between receptive vocabulary size and speaking skills as assessed in a natural sentence production task. We asked whether measures derived from spontaneous responses to every-day questions correlate with the size of participants’ vocabulary. Moreover, we assessed the suitability of automatic speech recognition for the analysis of participants’ responses in complex language production data. We found that vocabulary size predicted indices of spontaneous speech: Individuals with a larger vocabulary produced more words and had a higher speech-silence ratio compared to individuals with a smaller vocabulary. Importantly, these relationships were reliably identified using manual and automated transcription methods. Taken together, our results suggest that spontaneous speech elicitation is a useful method to investigate natural language production and that automatic speech recognition can alleviate the burden of labor-intensive speech transcription.
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers are functioning as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
  • Jordens, P., & Bittner, D. (2017). Developing interlanguage: Driving forces in children learning Dutch and German. IRAL, 55(4), 365-392. doi:10.1515/iral-2017-0147.

    Abstract

    Spontaneous language learning both in children learning their mother tongue and in adults learning a second language shows that language development proceeds in a stage-wise manner. Given that a developmental stage is defined as a coherent linguistic system, utterances of language learners can be accounted for in terms of what Selinker (1972) referred to as Interlanguage. This paper is a study on the early interlanguage systems of children learning Dutch and German as their mother tongue. The present child learner systems, so it is claimed, are coherent lexical systems based on types of verb-argument structure that are either agentive (as in Dutch: kannie bal pakke 'cannot ball get', or German: mag nich nase putzen 'like not nose clean') or non-agentive (as in Dutch: popje valt bijna 'doll falls nearly', or in German: ente fällt 'duck falls'). At this lexical stage, functional morphology (e.g. morphological finiteness, tense), function words (e.g. auxiliary verbs, determiners) and word order variation are absent. For these typically developing children, both in Dutch and in German, it is claimed that developmental progress is driven by the acquisition of the formal properties of topicalization. It is, furthermore, argued that this feature seems to serve as the driving force in the instantiation of the functional, i.e. informational linguistic properties of the target-language system.
  • Jordens, P. (2004). Morphology in Second Language Acquisition. In G. Booij (Ed.), Morphologie: Ein internationales Handbuch zur Flexion und Wortbildung (pp. 1806-1816). Berlin: Walter de Gruyter.
  • Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division.
  • Kapatsinski, V., & Harmon, Z. (2017). A Hebbian account of entrenchment and (over)-extension in language learning. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Meeting of the Cognitive Science Society (CogSci 2017) (pp. 2366-2371). Austin, TX: Cognitive Science Society.

    Abstract

    In production, frequently used words are preferentially extended to new, though related meanings. In comprehension, frequent exposure to a word instead makes the learner confident that all of the word’s legitimate uses have been experienced, resulting in an entrenched form-meaning mapping between the word and its experienced meaning(s). This results in a perception-production dissociation, where the forms speakers are most likely to map onto a novel meaning are precisely the forms that they believe can never be used that way. At first glance, this result challenges the idea of bidirectional form-meaning mappings, assumed by all current approaches to linguistic theory. In this paper, we show that bidirectional form-meaning mappings are not in fact challenged by this production-perception dissociation. We show that the production-perception dissociation is expected even if learners of the lexicon acquire simple symmetrical form-meaning associations through simple Hebbian learning.
  • Kapteijns, B., & Hintz, F. (2021). Comparing predictors of sentence self-paced reading times: Syntactic complexity versus transitional probability metrics. PLoS One, 16(7): e0254546. doi:10.1371/journal.pone.0254546.

    Abstract

    When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to reading times when entered into the same model. Our results showed that both measures explained significant portions of variance in self-paced reading times. Thus, researchers aiming to measure sentence complexity should take both SC and TP into account. All of the analyses were conducted with and without control variables known to influence reading times (word/sentence length, word frequency and word position) to showcase how the effects of SC and TP change in the presence of the control variables.

    Additional information

    supporting information
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2021). Prediction in bilingual children: The missing piece of the puzzle. In E. Kaan, & T. Grüter (Eds.), Prediction in Second Language Processing and Learning (pp. 116-137). Amsterdam: Benjamins.

    Abstract

    A wealth of studies has shown that more proficient monolingual speakers are better at predicting upcoming information during language comprehension. Similarly, prediction skills of adult second language (L2) speakers in their L2 have also been argued to be modulated by their L2 proficiency. How exactly language proficiency and prediction are linked, however, is yet to be systematically investigated. One group of language users which has the potential to provide invaluable insights into this link is bilingual children. In this paper, we compare bilingual children’s prediction skills with those of monolingual children and adult L2 speakers, and show how investigating bilingual children’s prediction skills may contribute to our understanding of how predictive processing works.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.

    Abstract

    There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.

    Abstract

    Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and if late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5) 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by the morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed language exposure on spatial language acquisition by signing children and adults. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2372-2376). Austin, TX: Cognitive Science Society.

    Abstract

    Deaf children born to hearing parents are exposed to language input quite late, which has long-lasting effects on language production. Previous studies with deaf individuals mostly focused on linguistic expressions of motion events, which have several event components. We do not know if similar effects emerge in simple events such as descriptions of spatial configurations of objects. Moreover, previous data mainly come from late adult signers. There is not much known about language development of late signing children soon after learning sign language. We compared simple event descriptions of late signers of Turkish Sign Language (adults, children) to age-matched native signers. Our results indicate that while late signers in both age groups are native-like in frequency of expressing a relational encoding, they lag behind native signers in using morphologically complex linguistic forms compared to other simple forms. Late signing children perform similar to adults and thus showed no development over time.
