Publications

  • Hintz, F., & Meyer, A. S. (2015). Prediction and production of simple mathematical equations: Evidence from anticipatory eye movements. PLoS One, 10(7): e0130766. doi:10.1371/journal.pone.0130766.

    Abstract

    The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch-speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.

    Additional information

    Data availability
  • Hintz, F., Khoe, Y. H., Strauß, A., Psomakas, A. J. A., & Holler, J. (2023). Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cognitive, Affective and Behavioral Neuroscience, 23, 340-353. doi:10.3758/s13415-023-01074-8.

    Abstract

    In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded electroencephalogram from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke slightly preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
  • Hintz, F., Voeten, C. C., & Scharenborg, O. (2023). Recognizing non-native spoken words in background noise increases interference from the native language. Psychonomic Bulletin & Review, 30, 1549-1563. doi:10.3758/s13423-022-02233-7.

    Abstract

    Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition—especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half were presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood of fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one’s native language during non-native spoken-word recognition under adverse conditions.

    Additional information

    table 2 target-absent items
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hoey, E. (2015). Lapses: How people arrive at, and deal with, discontinuities in talk. Research on Language and Social Interaction, 48(4), 430-453. doi:10.1080/08351813.2015.1090116.

    Abstract

    Interaction includes moments of silence. When all participants forgo the option to speak, the silence can be called a “lapse.” This article builds on existing work on lapses and other kinds of silences (gaps, pauses, and so on) to examine how participants reach a point where lapsing is a possibility and how they orient to the lapse that subsequently develops. Drawing from a wide range of activities and settings, I will show that participants may treat lapses as (a) the relevant cessation of talk, (b) the allowable development of silence, or (c) the conspicuous absence of talk. Data are in American and British English.
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (2015). Editorial: Turn-taking in human communicative interaction. Frontiers in Psychology, 6: 1919. doi:10.3389/fpsyg.2015.01919.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture.
    Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs. unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.

    Abstract

    Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation. Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.

    Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.

    The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, co-opted for communication, and refined with domain-specific characteristics.

    A new, situated framework for understanding human language processing is called for, one that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction requiring fast processing.
  • Holler, J., & Kendrick, K. H. (2015). Unaddressed participants’ gaze in multi-person interaction: Optimizing recipiency. Frontiers in Psychology, 6: 98. doi:10.3389/fpsyg.2015.00098.

    Abstract

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogues to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation towards online processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question-response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze towards the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
  • De Hoop, H., Levshina, N., & Segers, M. (2023). The effect of the use of T and V pronouns in Dutch HR communication. Journal of Pragmatics, 203, 96-109. doi:10.1016/j.pragma.2022.11.017.

    Abstract

    In an online experiment among native speakers of Dutch we measured addressees' responses to emails written in the informal pronoun T or the formal pronoun V in HR communication. 172 participants (61 male, mean age 37 years) read either the V-versions or the T-versions of two invitation emails and two rejection emails by four different fictitious recruiters. After each email, participants had to score their appreciation of the company and the recruiter on five different scales each, such as The recruiter who wrote this email seems … [scale from friendly to unfriendly]. We hypothesized that (i) the V-pronoun would be more appreciated in letters of rejection, and the T-pronoun in letters of invitation, and (ii) older people would appreciate the V-pronoun more than the T-pronoun, and the other way around for younger people. Although neither of these hypotheses was supported, we did find a small effect of pronoun: Emails written in V were more highly appreciated than emails in T, irrespective of type of email (invitation or rejection), and irrespective of the participant's age, gender, and level of education. At the same time, we observed differences in the strength of this effect across different scales.
  • Hörpel, S. G., & Firzlaff, U. (2019). Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. Journal of Neurophysiology, 121(4), 1501-1512. doi:10.1152/jn.00748.2018.
  • Horschig, J. M., Smolders, R., Bonnefond, M., Schoffelen, J.-M., Van den Munckhof, P., Schuurman, P. R., Cools, R., Denys, D., & Jensen, O. (2015). Directed communication between nucleus accumbens and neocortex in humans is differentially supported by synchronization in the theta and alpha band. PLoS One, 10(9): e0138685. doi:10.1371/journal.pone.0138685.

    Abstract

    Here, we report evidence for oscillatory bi-directional interactions between the nucleus accumbens and the neocortex in humans. Six patients performed a demanding covert visual attention task while we simultaneously recorded brain activity from deep-brain electrodes implanted in the nucleus accumbens and the surface electroencephalogram (EEG). Both theta and alpha oscillations were strongly coherent with the frontal and parietal EEG during the task. Theta-band coherence increased during processing of the visual stimuli. Granger causality analysis revealed that the nucleus accumbens was communicating with the neocortex primarily in the theta-band, while the cortex was communicating with the nucleus accumbens in the alpha-band. These data are consistent with a model in which theta- and alpha-band oscillations serve dissociable roles: Prior to stimulus processing, the cortex might suppress ongoing processing in the nucleus accumbens by modulating alpha-band activity. Subsequently, upon stimulus presentation, theta oscillations might facilitate the active exchange of stimulus information from the nucleus accumbens to the cortex.
  • Horton, S., Jackson, V., Boyce, J., Franken, M.-C., Siemers, S., St John, M., Hearps, S., Van Reyk, O., Braden, R., Parker, R., Vogel, A. P., Eising, E., Amor, D. J., Irvine, J., Fisher, S. E., Martin, N. G., Reilly, S., Bahlo, M., Scheffer, I., & Morgan, A. (2023). Self-reported stuttering severity is accurate: Informing methods for large-scale data collection in stuttering. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2023_JSLHR-23-00081.

    Abstract

    Purpose:
    To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.

    Method:
    Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5–84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.

    Results:
    There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.

    Conclusions:
    Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-reported stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Howe, L., Lawson, D. J., Davies, N. M., St Pourcain, B., Lewis, S. J., Smith, G. D., & Hemani, G. (2019). Genetic evidence for assortative mating on alcohol consumption in the UK Biobank. Nature Communications, 10: 5039. doi:10.1038/s41467-019-12424-x.

    Abstract

    Alcohol use is correlated within spouse-pairs, but it is difficult to disentangle effects of alcohol consumption on mate-selection from social factors or the shared spousal environment. We hypothesised that genetic variants related to alcohol consumption may, via their effect on alcohol behaviour, influence mate selection. Here, we find strong evidence that an individual’s self-reported alcohol consumption and their genotype at rs1229984, a missense variant in ADH1B, are associated with their partner’s self-reported alcohol use. Applying Mendelian randomization, we estimate that a unit increase in an individual’s weekly alcohol consumption increases partner’s alcohol consumption by 0.26 units (95% C.I. 0.15, 0.38; P = 8.20 × 10−6). Furthermore, we find evidence of spousal genotypic concordance for rs1229984, suggesting that spousal concordance for alcohol consumption existed prior to cohabitation. Although the SNP is strongly associated with ancestry, our results suggest some concordance independent of population stratification. Our findings suggest that alcohol behaviour directly influences mate selection.
  • Howe, L. J., Richardson, T. G., Arathimos, R., Alvizi, L., Passos-Bueno, M. R., Stanier, P., Nohr, E., Ludwig, K. U., Mangold, E., Knapp, M., Stergiakouli, E., St Pourcain, B., Smith, G. D., Sandy, J., Relton, C. L., Lewis, S. J., Hemani, G., & Sharp, G. C. (2019). Evidence for DNA methylation mediating genetic liability to non-syndromic cleft lip/palate. Epigenomics, 11(2), 133-145. doi:10.2217/epi-2018-0091.

    Abstract

    Aim: To determine if nonsyndromic cleft lip with or without cleft palate (nsCL/P) genetic risk variants influence liability to nsCL/P through gene regulation pathways, such as those involving DNA methylation. Materials & methods: nsCL/P genetic summary data and methylation data from four studies were used in conjunction with Mendelian randomization and joint likelihood mapping to investigate potential mediation of nsCL/P genetic variants. Results & conclusion: Evidence was found at VAX1 (10q25.3), LOC146880 (17q23.3) and NTN1 (17p13.1), that liability to nsCL/P and variation in DNA methylation might be driven by the same genetic variant, suggesting that genetic variation at these loci may increase liability to nsCL/P by influencing DNA methylation. Follow-up analyses using different tissues and gene expression data provided further insight into possible biological mechanisms.

    Additional information

    Supplementary material
  • Li, W., Li, X., Huang, L., Kong, X., Yang, W., Wei, D., Li, J., Cheng, H., Zhang, Q., Qiu, J., & Liu, J. (2015). Brain structure links trait creativity to openness to experience. Social Cognitive and Affective Neuroscience, 10(2), 191-198. doi:10.1093/scan/nsu041.

    Abstract

    Creativity is crucial to the progression of human civilization and has led to important scientific discoveries. In particular, individuals are more likely to make scientific discoveries if they possess certain personality traits of creativity (trait creativity), including imagination, curiosity, challenge and risk-taking. This study used voxel-based morphometry to identify the brain regions underlying individual differences in trait creativity, as measured by the Williams creativity aptitude test, in a large sample (n = 246). We found that creative individuals had higher gray matter volume in the right posterior middle temporal gyrus (pMTG), which might be related to semantic processing during novelty seeking (e.g. novel association, conceptual integration and metaphor understanding). More importantly, although basic personality factors such as openness to experience, extroversion, conscientiousness and agreeableness (as measured by the NEO Personality Inventory) all contributed to trait creativity, only openness to experience mediated the association between the right pMTG volume and trait creativity. Taken together, our results suggest that the basic personality trait of openness might play an important role in shaping an individual’s trait creativity.
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hubers, F., Cucchiarini, C., Strik, H., & Dijkstra, T. (2019). Normative data of Dutch idiomatic expressions: Subjective judgments you can bank on. Frontiers in Psychology, 10: 1075. doi:10.3389/fpsyg.2019.01075.

    Abstract

    The processing of idiomatic expressions is a topical issue in empirical research. Various factors have been found to influence idiom processing, such as idiom familiarity and idiom transparency. Information on these variables is usually obtained through norming studies. Studies investigating the effect of various properties on idiom processing have led to ambiguous results. This may be due to the variability of operationalizations of the idiom properties across norming studies, which in turn may affect the reliability of the subjective judgements. However, not all studies that collected normative data on idiomatic expressions investigated their reliability, and studies that did address the reliability of subjective ratings used various measures and produced mixed results. In this study, we investigated the reliability of subjective judgements, the relation between subjective and objective idiom frequency, and the impact of these dimensions on the participants’ idiom knowledge by collecting normative data of five subjective idiom properties (Frequency of Exposure, Meaning Familiarity, Frequency of Usage, Transparency, and Imageability) from 390 native speakers and objective corpus frequency for 374 Dutch idiomatic expressions. For reliability, we compared measures calculated in previous studies with the D-coefficient, a metric taken from Generalizability Theory. High reliability was found for all subjective dimensions. One reliability metric, Krippendorff’s alpha, generally produced lower values, while similar values were obtained for three other measures (Cronbach’s alpha, Intraclass Correlation Coefficient, and the D-coefficient). Advantages of the D-coefficient are that it can be applied to unbalanced research designs, and to estimate the minimum number of raters required to obtain reliable ratings. Slightly higher coefficients were observed for so-called experience-based dimensions (Frequency of Exposure, Meaning Familiarity, and Frequency of Usage) than for content-based dimensions (Transparency and Imageability). In addition, fewer raters were required to obtain reliable ratings for the experience-based dimensions. Subjective and objective frequency appeared to be poorly correlated, while all subjective idiom properties and objective frequency turned out to affect idiom knowledge. Meaning Familiarity, Subjective and Objective Frequency of Exposure, Frequency of Usage, and Transparency positively contributed to idiom knowledge, while a negative effect was found for Imageability. We discuss these relationships in more detail, and give methodological recommendations with respect to the procedures and the measure to calculate reliability.

    Additional information

    Supplementary material
  • Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.

    Abstract

    Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language, thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate predicted representations to be retrieved. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening.
  • Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.

    Abstract

    There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
  • Huettig, F., & Brouwer, S. (2015). Delayed anticipatory spoken language processing in adults with dyslexia - Evidence from eye-tracking. Dyslexia, 21(2), 97-122. doi:10.1002/dys.1497.

    Abstract

    It is now well-established that anticipation of up-coming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here we investigated whether anticipatory spoken language processing is related to individuals’ word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., "Kijk naar de(COM) afgebeelde piano(COM)", look at the displayed piano) while viewing four objects. Articles (Dutch “het” or “de”) were gender-marked such that the article agreed in gender only with the target and thus participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.
  • Huettig, F. (2015). Four central questions about prediction in language processing. Brain Research, 1626, 118-135. doi:10.1016/j.brainres.2015.02.014.

    Abstract

    The notion that prediction is a fundamental principle of human information processing has been en vogue over recent years. The investigation of language processing may be particularly illuminating for testing this claim. Linguists traditionally have argued prediction plays only a minor role during language understanding because of the vast possibilities available to the language user as each word is encountered. In the present review I consider four central questions of anticipatory language processing: Why (i.e. what is the function of prediction in language processing)? What (i.e. what are the cues used to predict up-coming linguistic information and what type of representations are predicted)? How (what mechanisms are involved in predictive language processing and what is the role of possible mediating factors such as working memory)? When (i.e. do individuals always predict up-coming input during language processing)? I propose that prediction occurs via a set of diverse PACS (production-, association-, combinatorial-, and simulation-based prediction) mechanisms which are minimally required for a comprehensive account of predictive language processing. Models of anticipatory language processing must be revised to take multiple mechanisms, mediating factors, and situational context into account. Finally, I conjecture that the evidence considered here is consistent with the notion that prediction is an important aspect but not a fundamental principle of language processing.
  • Huettig, F., Voeten, C. C., Pascual, E., Liang, J., & Hintz, F. (2023). Do autistic children differ in language-mediated prediction? Cognition, 239: 105571. doi:10.1016/j.cognition.2023.105571.

    Abstract

    Prediction appears to be an important characteristic of the human mind. It has also been suggested that prediction is a core difference of autistic children. Past research exploring language-mediated anticipatory eye movements in autistic children, however, has been somewhat contradictory, with some studies finding normal anticipatory processing in autistic children with low levels of autistic traits but others observing weaker prediction effects in autistic children with less receptive language skills. Here we investigated language-mediated anticipatory eye movements in young children who differed in the severity of their level of autistic traits and were in professional institutional care in Hangzhou, China. We chose the same spoken sentences (translated into Mandarin Chinese) and visual stimuli as a previous study which observed robust prediction effects in young children (Mani & Huettig, 2012) and included a control group of typically-developing children. Typically developing but not autistic children showed robust prediction effects. Most interestingly, autistic children with lower communication, motor, and (adaptive) behavior scores exhibited both less predictive and non-predictive visual attention behavior. Our results raise the possibility that differences in language-mediated anticipatory eye movements in autistic children with higher levels of autistic traits may be differences in visual attention in disguise, a hypothesis that needs further investigation.
  • Huettig, F., & Ferreira, F. (2023). The myth of normal reading. Perspectives on Psychological Science, 18(4), 863-870. doi:10.1177/17456916221127226.

    Abstract

    We argue that the educational and psychological sciences must embrace the diversity of reading rather than chase the phantom of normal reading behavior. We critically discuss the research practice of asking participants in experiments to read “normally”. We then draw attention to the large cross-cultural and linguistic diversity around the world and consider the enormous diversity of reading situations and goals. Finally, we observe that people bring a huge diversity of brains and experiences to the reading task. This leads to certain implications. First, there are important lessons for how to conduct psycholinguistic experiments. Second, we need to move beyond Anglo-centric reading research and produce models of reading that reflect the large cross-cultural diversity of languages and types of writing systems. Third, we must acknowledge that there are multiple ways of reading and reasons for reading, and none of them is normal or better or a “gold standard”. Finally, we must stop stigmatizing individuals who read differently and for different reasons, and there should be increased focus on teaching the ability to extract information relevant to the person’s goals. What is important is not how well people decode written language and how fast people read but what people comprehend given their own stated goals.
  • Huisman, J. L. A., Majid, A., & Van Hout, R. (2019). The geographical configuration of a language area influences linguistic diversity. PLoS One, 14(6): e0217363. doi:10.1371/journal.pone.0217363.

    Abstract

    Like the transfer of genetic variation through gene flow, language changes constantly as a result of its use in human interaction. Contact between speakers is most likely to happen when they are close in space, time, and social setting. Here, we investigated the role of geographical configuration in this process by studying linguistic diversity in Japan, which comprises a large connected mainland (less isolation, more potential contact) and smaller island clusters of the Ryukyuan archipelago (more isolation, less potential contact). We quantified linguistic diversity using dialectometric methods, and performed regression analyses to assess the extent to which distance in space and time predict contemporary linguistic diversity. We found that language diversity in general increases as geographic distance increases and as time passes—as with biodiversity. Moreover, we found that (I) for mainland languages, linguistic diversity is most strongly related to geographic distance—a so-called isolation-by-distance pattern, and that (II) for island languages, linguistic diversity reflects the time since varieties separated and diverged—an isolation-by-colonisation pattern. Together, these results confirm previous findings that (linguistic) diversity is shaped by distance, but also go beyond this by demonstrating the critical role of geographic configuration.
  • Huisman, J. L. A., Van Hout, R., & Majid, A. (2023). Cross-linguistic constraints and lineage-specific developments in the semantics of cutting and breaking in Japonic and Germanic. Linguistic Typology, 27(1), 41-75. doi:10.1515/lingty-2021-2090.

    Abstract

    Semantic variation in the cutting and breaking domain has been shown to be constrained across languages in a previous typological study, but it was unclear whether Japanese was an outlier in this domain. Here we revisit cutting and breaking in the Japonic language area by collecting new naming data for 40 videoclips depicting cutting and breaking events in Standard Japanese, the highly divergent Tohoku dialects, as well as four related Ryukyuan languages (Amami, Okinawa, Miyako and Yaeyama). We find that the Japonic languages recapitulate the same semantic dimensions attested in the previous typological study, confirming that semantic variation in the domain of cutting and breaking is indeed cross-linguistically constrained. We then compare our new Japonic data to previously collected Germanic data and find that, in general, related languages resemble each other more than unrelated languages, and that the Japonic languages resemble each other more than the Germanic languages do. Nevertheless, English resembles all of the Japonic languages more than it resembles Swedish. Together, these findings show that the rate and extent of semantic change can differ between language families, indicating the existence of lineage-specific developments on top of universal cross-linguistic constraints.
  • Huizeling, E., Alday, P. M., Peeters, D., & Hagoort, P. (2023). Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia, 191: 108730. doi:10.1016/j.neuropsychologia.2023.108730.

    Abstract

    EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing - when the verb was predictive of the noun - or during noun processing - when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and unpredictable nouns. We replicated typical effects of noun congruence but not predictability on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports, where the visual context may have facilitated processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing and the length of time fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.
  • Hultén, A., Schoffelen, J.-M., Uddén, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • Hustá, C., Dalmaijer, E., Belopolsky, A., & Mathôt, S. (2019). The pupillary light response reflects visual working memory content. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1522-1528. doi:10.1037/xhp0000689.

    Abstract

    Recent studies have shown that the pupillary light response (PLR) is modulated by higher cognitive functions, presumably through activity in visual sensory brain areas. Here we use the PLR to test the involvement of sensory areas in visual working memory (VWM). In two experiments, participants memorized either bright or dark stimuli. We found that pupils were smaller when a prestimulus cue indicated that a bright stimulus should be memorized; this reflects a covert shift of attention during encoding of items into VWM. Crucially, we obtained the same result with a poststimulus cue, which shows that internal shifts of attention within VWM affect pupil size as well. Strikingly, the effect of VWM content on pupil size was most pronounced immediately after the poststimulus cue, and then dissipated. This suggests that a shift of attention within VWM momentarily activates an "active" memory representation, but that this representation quickly transforms into a "hidden" state that does not rely on sensory areas.

    Additional information

    Supplementary_xhp0000689.docx
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2023). Effects of picture naming and categorization on concurrent comprehension: Evidence from the N400. Collabra: Psychology, 9(1): 88129. doi:10.1525/collabra.88129.

    Abstract

    In conversations, interlocutors concurrently perform two related processes: speech comprehension and speech planning. We investigated effects of speech planning on comprehension using EEG. Dutch speakers listened to sentences that ended with expected or unexpected target words. In addition, a picture was presented two seconds after target onset (Experiment 1) or 50 ms before target onset (Experiment 2). Participants’ task was to name the picture or to stay quiet depending on the picture category. In Experiment 1, we found a strong N400 effect in response to unexpected compared to expected target words. Importantly, this N400 effect was reduced in Experiment 2 compared to Experiment 1. Unexpectedly, the N400 effect was not smaller in the naming compared to categorization condition. This indicates that conceptual preparation or the decision whether to speak (taking place in both task conditions of Experiment 2) rather than processes specific to word planning interfere with comprehension.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2019). How in-group bias influences source memory for words learned from in-group and out-group speakers. Frontiers in Human Neuroscience, 13: 308. doi:10.3389/fnhum.2019.00308.

    Abstract

    Individuals rapidly extract information about others’ social identity, including whether or not they belong to their in-group. Group membership status has been shown to affect how attentively people encode information conveyed by those others. These findings are highly relevant for the field of psycholinguistics where there exists an open debate on how words are represented in the mental lexicon and how abstract or context-specific these representations are. Here, we used a novel word learning paradigm to test our proposal that the group membership status of speakers also affects how speaker-specific the representations of novel words are. Participants learned new words from speakers who either attended their own university (in-group speakers) or did not (out-group speakers) and performed a task to measure their individual in-group bias. Then, their source memory of the new words was tested in a recognition test to probe the speaker-specific content of the novel lexical representations and assess how it related to individual in-group biases. We found that speaker group membership and participants’ in-group bias affected participants’ decision biases. The stronger the in-group bias, the more cautious participants were in their decisions. This applied particularly to in-group related decisions. These findings indicate that social biases can influence recognition thresholds. Taking a broader scope, defining how information is represented is a topic of great overlap between the fields of memory and psycholinguistics. Nevertheless, researchers from these fields tend to stay within the theoretical and methodological borders of their own field, missing the chance to deepen their understanding of phenomena that are of common interest. Here we show how methodologies developed in the memory field can be implemented in language research to shed light on an important theoretical issue that relates to the composition of lexical representations.

    Additional information

    Supplementary material
  • Indefrey, P., Brown, C. M., Hellwig, F. M., Amunts, K., Herzog, H., Seitz, R. J., & Hagoort, P. (2001). A neural correlate of syntactic encoding during speech production. Proceedings of the National Academy of Sciences of the United States of America, 98, 5933-5936. doi:10.1073/pnas.101118098.

    Abstract

    Spoken language is one of the most compact and structured ways to convey information. The linguistic ability to structure individual words into larger sentence units permits speakers to express a nearly unlimited range of meanings. This ability is rooted in speakers’ knowledge of syntax and in the corresponding process of syntactic encoding. Syntactic encoding is highly automatized, operates largely outside of conscious awareness, and overlaps closely in time with several other processes of language production. With the use of positron emission tomography we investigated the cortical activations during spoken language production that are related to the syntactic encoding process. In the paradigm of restrictive scene description, utterances varying in complexity of syntactic encoding were elicited. Results provided evidence that the left Rolandic operculum, caudally adjacent to Broca’s area, is involved in both sentence-level and local (phrase-level) syntactic encoding during speaking.
  • Indefrey, P., Hagoort, P., Herzog, H., Seitz, R. J., & Brown, C. M. (2001). Syntactic processing in left prefrontal cortex is independent of lexical meaning. Neuroimage, 14, 546-555. doi:10.1006/nimg.2001.0867.

    Abstract

    In language comprehension a syntactic representation is built up even when the input is semantically uninterpretable. We report data on brain activation during syntactic processing, from an experiment on the detection of grammatical errors in meaningless sentences. The experimental paradigm was such that the syntactic processing was distinguished from other cognitive and linguistic functions. The data reveal that in syntactic error detection an area of the left dorsolateral prefrontal cortex, adjacent to Broca’s area, is specifically involved in the syntactic processing aspects, whereas other prefrontal areas subserve general error detection processes.
  • Ioumpa, K., Graham, S. A., Clausner, T., Fisher, S. E., Van Lier, R., & Van Leeuwen, T. M. (2019). Enhanced self-reported affect and prosocial behaviour without differential physiological responses in mirror-sensory synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190395. doi:10.1098/rstb.2019.0395.

    Abstract

    Mirror-sensory synaesthetes mirror the pain or touch that they observe in other people on their own bodies. This type of synaesthesia has been associated with enhanced empathy. We investigated whether the enhanced empathy of people with mirror-sensory synaesthesia influences the experience of situations involving touch or pain and whether it affects their prosocial decision making. Mirror-sensory synaesthetes (N = 18, all female), verified with a touch-interference paradigm, were compared with a similar number of age-matched control individuals (all female). Participants viewed arousing images depicting pain or touch; we recorded subjective valence and arousal ratings, and physiological responses, hypothesizing more extreme reactions in synaesthetes. The subjective impact of positive and negative images was stronger in synaesthetes than in control participants; the stronger the reported synaesthesia, the more extreme the picture ratings. However, there was no evidence for differential physiological or hormonal responses to arousing pictures. Prosocial decision making was assessed with an economic game assessing altruism, in which participants had to divide money between themselves and a second player. Mirror-sensory synaesthetes donated more money than non-synaesthetes, showing enhanced prosocial behaviour, and also scored higher on the Interpersonal Reactivity Index as a measure of empathy. Our study demonstrates the subjective impact of mirror-sensory synaesthesia and its stimulating influence on prosocial behaviour.

  • Iyer, S., Sam, F. S., DiPrimio, N., Preston, G., Verheijen, J., Murthy, K., Parton, Z., Tsang, H., Lao, J., Morava, E., & Perlstein, E. O. (2019). Repurposing the aldose reductase inhibitor and diabetic neuropathy drug epalrestat for the congenital disorder of glycosylation PMM2-CDG. Disease models & mechanisms, 12(11): UNSP dmm040584. doi:10.1242/dmm.040584.

    Abstract

    Phosphomannomutase 2 deficiency, or PMM2-CDG, is the most common congenital disorder of glycosylation and affects over 1000 patients globally. There are no approved drugs that treat the symptoms or root cause of PMM2-CDG. To identify clinically actionable compounds that boost human PMM2 enzyme function, we performed a multispecies drug repurposing screen using a novel worm model of PMM2-CDG, followed by PMM2 enzyme functional studies in PMM2-CDG patient fibroblasts. Drug repurposing candidates from this study, and drug repurposing candidates from a previously published study using yeast models of PMM2-CDG, were tested for their effect on human PMM2 enzyme activity in PMM2-CDG fibroblasts. Of the 20 repurposing candidates discovered in the worm-based phenotypic screen, 12 were plant-based polyphenols. Insights from structure-activity relationships revealed epalrestat, the only antidiabetic aldose reductase inhibitor approved for use in humans, as a first-in-class PMM2 enzyme activator. Epalrestat increased PMM2 enzymatic activity in four PMM2-CDG patient fibroblast lines with genotypes R141H/F119L, R141H/E139K, R141H/N216I and R141H/F183S. PMM2 enzyme activity gains ranged from 30% to 400% over baseline, depending on genotype. Pharmacological inhibition of aldose reductase by epalrestat may shunt glucose from the polyol pathway to glucose-1,6-bisphosphate, which is an endogenous stabilizer and coactivator of PMM2 homodimerization. Epalrestat is a safe, oral and brain penetrant drug that was approved 27 years ago in Japan to treat diabetic neuropathy in geriatric populations. We demonstrate that epalrestat is the first small molecule activator of PMM2 enzyme activity with the potential to treat peripheral neuropathy and correct the underlying enzyme deficiency in a majority of pediatric and adult PMM2-CDG patients.

    Additional information

    DMM040584supp.pdf
  • Jadoul, Y., & Ravignani, A. (2023). Modelling the emergence of synchrony from decentralized rhythmic interactions in animal communication. Proceedings of the Royal Society B: Biological Sciences, 290(2003). doi:10.1098/rspb.2023.0876.

    Abstract

    To communicate, an animal's strategic timing of rhythmic signals is crucial. Evolutionary, game-theoretical, and dynamical systems models can shed light on the interaction between individuals and the associated costs and benefits of signalling at a specific time. Mathematical models that study rhythmic interactions from a strategic or evolutionary perspective are rare in animal communication research. But new inspiration may come from a recent game theory model of how group synchrony emerges from local interactions of oscillatory neurons. In the study, the authors analyse when the benefit of joint synchronization outweighs the cost of individual neurons sending electrical signals to each other. They postulate there is a benefit for pairs of neurons to fire together and a cost for a neuron to communicate. The resulting model delivers a variant of a classical dynamical system, the Kuramoto model. Here, we present an accessible overview of the Kuramoto model and evolutionary game theory, and of the 'oscillatory neurons' model. We interpret the model's results and discuss the advantages and limitations of using this particular model in the context of animal rhythmic communication. Finally, we sketch potential future directions and discuss the need to further combine evolutionary dynamics, game theory and rhythmic processes in animal communication studies.
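The Kuramoto dynamics surveyed in this paper can be sketched in a few lines. The sketch below is a minimal illustration of the standard model only: the oscillator count, coupling strength K, and frequency distribution are arbitrary choices for demonstration, not parameters from the paper or the 'oscillatory neurons' model it discusses.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    # One Euler step of the Kuramoto model:
    # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    # r in [0, 1]; r close to 1 means the population is synchronized.
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)  # random initial phases
omega = rng.normal(0.0, 0.5, 50)           # heterogeneous natural frequencies
for _ in range(5000):
    theta = kuramoto_step(theta, omega, K=2.0)
r = order_parameter(theta)  # with coupling well above critical, r ends up high
```

With coupling set well above the mean-field critical value for this frequency spread, the phases lock and the order parameter r approaches 1; setting K near 0 leaves the oscillators drifting incoherently.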
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). PyGellermann: a Python tool to generate pseudorandom series for human and non-human animal behavioural experiments. BMC Research Notes, 16: 135. doi:10.1186/s13104-023-06396-x.

    Abstract

    Objective

    Researchers in animal cognition, psychophysics, and experimental psychology need to randomise the presentation order of trials in experimental sessions. In many paradigms, for each trial, one of two responses can be correct, and the trials need to be ordered such that the participant’s responses are a fair assessment of their performance. Specifically, in some cases, especially for low numbers of trials, randomised trial orders need to be excluded if they contain simple patterns which a participant could accidentally match and so succeed at the task without learning.
    Results

    We present and distribute a simple Python software package and tool to produce pseudorandom sequences following the Gellermann series. This series has been proposed to pre-empt simple heuristics and avoid inflated performance rates via false positive responses. Our tool allows users to choose the sequence length and outputs a .csv file with newly and randomly generated sequences. This allows behavioural researchers to produce, in a few seconds, a pseudorandom sequence for their specific experiment. PyGellermann is available at https://github.com/YannickJadoul/PyGellermann.
  • Jago, L. S., Alcock, K., Meints, K., Pine, J. M., & Rowland, C. F. (2023). Language outcomes from the UK-CDI Project: Can risk factors, vocabulary skills and gesture scores in infancy predict later language disorders or concern for language development? Frontiers in Psychology, 14: 1167810. doi:10.3389/fpsyg.2023.1167810.

    Abstract

    At the group level, children exposed to certain health and demographic risk factors, and who have delayed language in early childhood, are more likely to have language problems later in childhood. However, it is unclear whether we can use these risk factors to predict whether an individual child is likely to develop problems with language (e.g., be diagnosed with a developmental language disorder). We tested this in a sample of 146 children who took part in the UK-CDI norming project. When the children were 15–18 months old, 1,210 British parents completed: (a) the UK-CDI (a detailed assessment of vocabulary and gesture use) and (b) the Family Questionnaire (questions about health and demographic risk factors). When the children were between 4 and 6 years, 146 of the same parents completed a short questionnaire that assessed (a) whether children had been diagnosed with a disability that was likely to affect language proficiency (e.g., developmental disability, language disorder, hearing impairment), but (b) also yielded a broader measure: whether the child’s language had raised any concern, either by a parent or professional. Discriminant function analyses were used to assess whether we could use different combinations of 10 risk factors, together with early vocabulary and gesture scores, to identify children (a) who had developed a language-related disability by the age of 4–6 years (20 children, 13.70% of the sample) or (b) for whom concern about language had been expressed (49 children; 33.56%). The overall accuracy of the models, and the specificity scores, were high, indicating that the measures correctly identified those children without a language-related disability and whose language was not of concern. However, sensitivity scores were low, indicating that the models could not identify those children who were diagnosed with a language-related disability or whose language was of concern. Several exploratory analyses were carried out to analyse these results further. Overall, the results suggest that it is difficult to use parent reports of early risk factors and language in the first 2 years of life to predict which children are likely to be diagnosed with a language-related disability. Possible reasons for this are discussed.

    Additional information

    follow up questionnaire table S1
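The high-specificity / low-sensitivity pattern described in this abstract follows from the standard confusion-matrix definitions. A minimal sketch with purely illustrative counts (hypothetical numbers, not the study's confusion matrix):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity: proportion of true cases the model flags, tp / (tp + fn).
    # Specificity: proportion of non-cases the model clears, tn / (tn + fp).
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative only: a screen that misses most affected children (low
# sensitivity) while rarely flagging unaffected ones (high specificity).
sens, spec = sensitivity_specificity(tp=5, fn=15, tn=120, fp=6)
```

In a sample where the target outcome is rare, a model can score high overall accuracy and specificity while still failing the children it is meant to find, which is why the abstract reports sensitivity separately.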
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2015). Lexical specificity training effects in second language learners. Language Learning, 65(2), 358-389. doi:10.1111/lang.12102.

    Abstract

    Children who start formal education in a second language may experience slower vocabulary growth in that language and subsequently experience disadvantages in literacy acquisition. The current study asked whether lexical specificity training can stimulate bilingual children's phonological awareness, which is considered to be a precursor to literacy. Therefore, Dutch monolingual and Turkish-Dutch bilingual children were taught new Dutch words with only minimal acoustic-phonetic differences. As a result of this training, the monolingual and the bilingual children improved on phoneme blending, which can be seen as an early aspect of phonological awareness. During training, the bilingual children caught up with the monolingual children on words with phonological overlap between their first language Turkish and their second language Dutch. It is concluded that learning minimal pair words fosters phoneme awareness in both first and second language preliterate children, and that for second language learners phonological overlap between the two languages positively affects training outcomes, likely due to linguistic transfer.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2019). The effects of larynx height on vowel production are mitigated by the active control of articulators. Journal of Phonetics, 74, 1-17. doi:10.1016/j.wocn.2019.02.002.

    Abstract

    The influence of larynx position on vowel articulation is an important topic in understanding speech production, the present-day distribution of linguistic diversity and the evolution of speech and language in our lineage. We introduce here a realistic computer model of the vocal tract, constructed from actual human MRI data, which can learn, using machine learning techniques, to control the articulators in such a way as to produce speech sounds matching as closely as possible to a given set of target vowels. We systematically control the vertical position of the larynx and we quantify the differences between the target and produced vowels for each such position across multiple replications. We report that, indeed, larynx height does affect the accuracy of reproducing the target vowels and the distinctness of the produced vowel system, that there is a “sweet spot” of larynx positions that are optimal for vowel production, but that nevertheless, even extreme larynx positions do not result in a collapsed or heavily distorted vowel space that would make speech unintelligible. Together with other lines of evidence, our results support the view that the vowel space of human languages is influenced by our larynx position, but that other positions of the larynx may also be fully compatible with speech.

    Additional information

    Research Data via Github
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jiang, J., Chen, C., Dai, B., Shi, G., Liu, L., & Lu, C. (2015). Leader emergence through interpersonal neural synchronization. Proceedings of the National Academy of Sciences of the United States of America, 112(14), 4274-4279. doi:10.1073/pnas.1422930112.

    Abstract

    The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
  • Jin, H., Wang, Q., Yang, Y.-F., Zhang, H., Gao, M., Jin, S., Chen, Y., Xu, T., Zheng, Y.-R., Chen, J., Xiao, Q., Yang, J., Wang, X., Geng, H., Ge, J., Wang, W.-W., Chen, X., Zhang, L., Zuo, X.-N., & Chuan-Peng, H. (2023). The Chinese Open Science Network (COSN): Building an open science community from scratch. Advances in Methods and Practices in Psychological Science, 6(1): 10.1177/25152459221144986. doi:10.1177/25152459221144986.

    Abstract

    Open Science is becoming a mainstream scientific ideology in psychology and related fields. However, researchers, especially early-career researchers (ECRs) in developing countries, are facing significant hurdles in engaging in Open Science and moving it forward. In China, various societal and cultural factors discourage ECRs from participating in Open Science, such as the lack of dedicated communication channels and the norm of modesty. To make the voice of Open Science heard by Chinese-speaking ECRs and scholars at large, the Chinese Open Science Network (COSN) was initiated in 2016. With its core values being grassroots-oriented, diversity, and inclusivity, COSN has grown from a small Open Science interest group to a recognized network both in the Chinese-speaking research community and the international Open Science community. So far, COSN has organized three in-person workshops, 12 tutorials, 48 talks, and 55 journal club sessions and translated 15 Open Science-related articles and blogs from English to Chinese. Currently, the main social media account of COSN (i.e., the WeChat Official Account) has more than 23,000 subscribers, and more than 1,000 researchers/students actively participate in the discussions on Open Science. In this article, we share our experience in building such a network to encourage ECRs in developing countries to start their own Open Science initiatives and engage in the global Open Science movement. We foresee great collaborative efforts of COSN together with all other local and international networks to further accelerate the Open Science movement.
  • Jodzio, A., Piai, V., Verhagen, L., Cameron, I., & Indefrey, P. (2023). Validity of chronometric TMS for probing the time-course of word production: A modified replication. Cerebral Cortex, 33(12), 7816-7829. doi:10.1093/cercor/bhad081.

    Abstract

    In the present study, we used chronometric TMS to probe the time-course of 3 brain regions during a picture naming task. The left inferior frontal gyrus, left posterior middle temporal gyrus, and left posterior superior temporal gyrus were all separately stimulated in 1 of 5 time-windows (225, 300, 375, 450, and 525 ms) from picture onset. We found posterior temporal areas to be causally involved in picture naming in earlier time-windows, whereas all 3 regions appear to be involved in the later time-windows. However, chronometric TMS produces nonspecific effects that may impact behavior, and furthermore, the time-course of any given process is a product of both the involved processing stages along with individual variation in the duration of each stage. We therefore extend previous work in the field by accounting for both individual variations in naming latencies and directly testing for nonspecific effects of TMS. Our findings reveal that both factors influence behavioral outcomes at the group level, underlining the importance of accounting for individual variations in naming latencies, especially for late processing stages closer to articulation, and recognizing the presence of nonspecific effects of TMS. The paper advances key considerations and avenues for future work using chronometric TMS to study overt production.
  • Jongman, S. R., Roelofs, A., & Meyer, A. S. (2015). Sustained attention in language production: An individual differences investigation. Quarterly Journal of Experimental Psychology, 68, 710-730. doi:10.1080/17470218.2014.964736.

    Abstract

    Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that some form of attention is required. Here, we investigated the contribution of sustained attention, which is the ability to maintain alertness over time. First, the sustained attention ability of participants was measured using auditory and visual continuous performance tasks. Next, the participants described pictures using simple noun phrases while their response times (RTs) and gaze durations were measured. Earlier research has suggested that gaze duration reflects language planning processes up to and including phonological encoding. Individual differences in sustained attention ability correlated with individual differences in the magnitude of the tail of the RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. These results suggest that language production requires sustained attention, especially after phonological encoding.
  • Jongman, S. R., Meyer, A. S., & Roelofs, A. (2015). The role of sustained attention in the production of conjoined noun phrases: An individual differences study. PLoS One, 10(9): e0137557. doi:10.1371/journal.pone.0137557.

    Abstract

    It has previously been shown that language production, performed simultaneously with a nonlinguistic task, involves sustained attention. Sustained attention concerns the ability to maintain alertness over time. Here, we aimed to replicate the previous finding by showing that individuals call upon sustained attention when they plan single noun phrases (e.g., "the carrot") and perform a manual arrow categorization task. In addition, we investigated whether speakers also recruit sustained attention when they produce conjoined noun phrases (e.g., "the carrot and the bucket") describing two pictures, that is, when both the first and second task are linguistic. We found that sustained attention correlated with the proportion of abnormally slow phrase-production responses. Individuals with poor sustained attention displayed a greater number of very slow responses than individuals with better sustained attention. Importantly, this relationship was obtained both for the production of single phrases while performing a nonlinguistic manual task, and the production of noun phrase conjunctions in referring to two spatially separated objects. Inhibition and updating abilities were also measured. These scores did not correlate with our measure of sustained attention, suggesting that sustained attention and executive control are distinct. Overall, the results suggest that planning conjoined noun phrases involves sustained attention, and that language production happens less automatically than has often been assumed.
  • Jordan, F., & Gray, R. D. (2001). Comment on Terrell, Kelly and Rainbird. Current Anthropology, 42(1), 114-115.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (2023). Introduction special issue: Marking the truth: A cross-linguistic approach to verum. Zeitschrift für Sprachwissenschaft, 42(3), 429-442. doi:10.1515/zfs-2023-2012.

    Abstract

    This special issue focuses on the theoretical and empirical underpinnings of truth-marking. The names that have been used to refer to this phenomenon include, among others, counter-assertive focus, polar(ity) focus, verum focus, emphatic polarity or simply verum. This terminological variety is suggestive of the wide range of ideas and conceptions that characterizes this research field. This collection aims to get closer to the core of what truly constitutes verum. We want to expand the empirical base and determine the common and diverging properties of truth-marking in the languages of the world. The objective is to set a theoretical and empirical baseline for future research on verum and related phenomena.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (Eds.). (2023). Marking the truth: A cross-linguistic approach to verum [Special Issue]. Zeitschrift für Sprachwissenschaft, 42(3). Retrieved from https://www.degruyter.com/journal/key/zfsw/42/3/html.
  • Kakimoto, N., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Senda, Y., Iwamoto, Y., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2019). T2 relaxation times of the retrodiscal tissue in patients with temporomandibular joint disorders and in healthy volunteers: A comparative study. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 128(3), 311-318. doi:10.1016/j.oooo.2019.02.005.

    Abstract

    Objective. The aims of this study were to compare the temporomandibular joint (TMJ) retrodiscal tissue T2 relaxation times between patients with temporomandibular disorders (TMDs) and asymptomatic volunteers and to assess the diagnostic potential of this approach.
    Study Design. Patients with TMD (n = 173) and asymptomatic volunteers (n = 17) were examined by using a 1.5-T magnetic resonance scanner. The imaging protocol consisted of oblique sagittal, T2-weighted, 8-echo fast spin echo sequences in the closed mouth position. Retrodiscal tissue T2 relaxation times were obtained. Additionally, disc location and reduction, disc configuration, joint effusion, osteoarthritis, and bone edema or osteonecrosis were classified using MRI scans. The T2 relaxation times of each group were statistically compared.
    Results. Retrodiscal tissue T2 relaxation times were significantly longer in patient groups than in asymptomatic volunteers (P < .01). T2 relaxation times were significantly longer in all of the morphologic categories. The most important variables affecting retrodiscal tissue T2 relaxation times were disc configuration, joint effusion, and osteoarthritis.
    Conclusion. Retrodiscal tissue T2 relaxation times of patients with TMD were significantly longer than those of healthy volunteers. This finding may lead to the development of a diagnostic marker to aid in the early detection of TMDs.
  • Kałamała, P., Chuderski, A., Szewczyk, J., Senderecka, M., & Wodniecka, Z. (2023). Bilingualism caught in a net: A new approach to understanding the complexity of bilingual experience. Journal of Experimental Psychology: General, 152(1), 157-174. doi:10.1037/xge0001263.

    Abstract

    The growing importance of research on bilingualism in psychology and neuroscience motivates the need for a psychometric model that can be used to understand and quantify this phenomenon. This research is the first to meet this need. We reanalyzed two data sets (N = 171 and N = 112) from relatively young adult language-unbalanced bilinguals and asked whether bilingualism is best described by the factor structure or by the network structure. The factor and network models were established on one data set and then validated on the other data set in a fully confirmatory manner. The network model provided the best fit to the data. This implies that bilingualism should be conceptualized as an emergent phenomenon arising from direct and idiosyncratic dependencies among the history of language acquisition, diverse language skills, and language-use practices. These dependencies can be reduced to neither a single universal quotient nor to some more general factors. Additional in-depth network analyses showed that the subjective perception of proficiency along with language entropy and language mixing were the most central indices of bilingualism, thus indicating that these measures can be especially sensitive to variation in the overall bilingual experience. Overall, this work highlights the great potential of psychometric network modeling to gain a more accurate description and understanding of complex (psycho)linguistic and cognitive phenomena.
  • Kamermans, K. L., Pouw, W., Mast, F. W., & Paas, F. (2019). Reinterpretation in visual imagery is possible without visual cues: A validation of previous research. Psychological Research, 83(6), 1237-1250. doi:10.1007/s00426-017-0956-5.

    Abstract

    Is visual reinterpretation of bistable figures (e.g., duck/rabbit figure) in visual imagery possible? Current consensus suggests that it is in principle possible because of converging evidence of quasi-pictorial functioning of visual imagery. Yet, studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with the aid of visual cues. Hence, reinterpretation was not performed with mental imagery alone. Therefore, in this study we assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180 degrees), two of which were visually inspected and one was explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 participants (out of 36; 30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not inflate reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.
  • Kamermans, K. L., Pouw, W., Fassi, L., Aslanidou, A., Paas, F., & Hostetter, A. B. (2019). The role of gesture as simulated action in reinterpretation of mental imagery. Acta Psychologica, 197, 131-142. doi:10.1016/j.actpsy.2019.05.004.

    Abstract

    In two experiments, we examined the role of gesture in reinterpreting a mental image. In Experiment 1, we found that participants gestured more about a figure they had learned through manual exploration than about a figure they had learned through vision. This supports claims that gestures emerge from the activation of perception-relevant actions during mental imagery. In Experiment 2, we investigated whether such gestures have a causal role in affecting the quality of mental imagery. Participants were randomly assigned to gesture, not gesture, or engage in a manual interference task as they attempted to reinterpret a figure they had learned through manual exploration. We found that manual interference significantly impaired participants' success on the task. Taken together, these results suggest that gestures reflect mental imaginings of interactions with a mental image and that these imaginings are critically important for mental manipulation and reinterpretation of that image. However, our results suggest that enacting the imagined movements in gesture is not critically important on this particular task.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. Nevertheless, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Karlebach, G., & Francks, C. (2015). Lateralization of gene expression in human language cortex. Cortex, 67, 30-36. doi:10.1016/j.cortex.2015.03.003.

    Abstract

    Lateralization is an important aspect of the functional brain architecture for language and other cognitive faculties. The molecular genetic basis of human brain lateralization is unknown, and recent studies have suggested that gene expression in the cerebral cortex is bilaterally symmetrical. Here we have re-analyzed two transcriptomic datasets derived from post mortem human cerebral cortex, with a specific focus on superior temporal and auditory language cortex in adults. We applied an empirical Bayes approach to model differential left-right expression, together with gene ontology analysis and meta-analysis. There was robust and reproducible lateralization of individual genes and gene ontology groups that are likely to fine-tune the electrophysiological and neurotransmission properties of cortical circuits, most notably synaptic transmission, nervous system development and glutamate receptor activity. Our findings anchor the cerebral biology of language to the molecular genetic level. Future research in model systems may determine how these molecular signatures of neurophysiological lateralization effect fine-tuning of cerebral cortical function, differently in the two hemispheres.
  • Kartushina, N., Hervais-Adelman, A., Frauenfelder, U. H., & Golestani, N. (2015). The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds. The Journal of the Acoustical Society of America, 138(2), 817-832. doi:10.1121/1.4926561.

    Abstract

    Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
  • Kaspi, A., Hildebrand, M. S., Jackson, V. E., Braden, R., Van Reyk, O., Howell, T., Debono, S., Lauretta, M., Morison, L., Coleman, M. J., Webster, R., Coman, D., Goel, H., Wallis, M., Dabscheck, G., Downie, L., Baker, E. K., Parry-Fielder, B., Ballard, K., Harrold, E., Ziegenfusz, S., Bennett, M. F., Robertson, E., Wang, L., Boys, A., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2023). Genetic aetiologies for childhood speech disorder: Novel pathways co-expressed during brain development. Molecular Psychiatry, 28, 1647-1663. doi:10.1038/s41380-022-01764-8.

    Abstract

    Childhood apraxia of speech (CAS), the prototypic severe childhood speech disorder, is characterized by motor programming and planning deficits. Genetic factors make substantive contributions to CAS aetiology, with a monogenic pathogenic variant identified in a third of cases, implicating around 20 single genes to date. Here we aimed to identify molecular causation in 70 unrelated probands ascertained with CAS. We performed trio genome sequencing. Our bioinformatic analysis examined single nucleotide, indel, copy number, structural and short tandem repeat variants. We prioritised appropriate variants arising de novo or inherited that were expected to be damaging based on in silico predictions. We identified high confidence variants in 18/70 (26%) probands, almost doubling the current number of candidate genes for CAS. Three of the 18 variants affected SETBP1, SETD1A and DDX3X, thus confirming their roles in CAS, while the remaining 15 occurred in genes not previously associated with this disorder. Fifteen variants arose de novo and three were inherited. We provide further novel insights into the biology of child speech disorder, highlighting the roles of chromatin organization and gene regulation in CAS, and confirm that genes involved in CAS are co-expressed during brain development. Our findings confirm a diagnostic yield comparable to, or even higher, than other neurodevelopmental disorders with substantial de novo variant burden. Data also support the increasingly recognised overlaps between genes conferring risk for a range of neurodevelopmental disorders. Understanding the aetiological basis of CAS is critical to end the diagnostic odyssey and ensure affected individuals are poised for precision medicine trials.
  • Kaufhold, S. P., & Van Leeuwen, E. J. C. (2019). Why intergroup variation matters for understanding behaviour. Biology Letters, 15(11): 20190695. doi:10.1098/rsbl.2019.0695.

    Abstract

    Intergroup variation (IGV) refers to variation between different groups of the same species. While its existence in the behavioural realm has been expected and evidenced, the potential effects of IGV are rarely considered in studies that aim to shed light on the evolutionary origins of human socio-cognition, especially in our closest living relatives—the great apes. Here, by taking chimpanzees as a point of reference, we argue that (i) IGV could plausibly explain inconsistent research findings across numerous topics of inquiry (experimental/behavioural studies on chimpanzees), (ii) understanding the evolutionary origins of behaviour requires an accurate assessment of species' modes of behaving across different socio-ecological contexts, which necessitates a reliable estimation of variation across intraspecific groups, and (iii) IGV in the behavioural realm is increasingly likely to be expected owing to the progressive identification of non-human animal cultures. With these points, and by extrapolating from chimpanzees to generic guidelines, we aim to encourage researchers to explicitly consider IGV as an explanatory variable in future studies attempting to understand the socio-cognitive and evolutionary determinants of behaviour in group-living animals.
  • Kelly, B. F., Kidd, E., & Wigglesworth, G. (2015). Indigenous children's language: Acquisition, preservation and evolution of language in minority contexts. First Language, 35(4-5), 279-285. doi:10.1177/0142723715618056.

    Abstract

    A comprehensive theory of language acquisition must explain how human infants can learn any one of the world’s 7000 or so languages. As such, an important part of understanding how languages are learned is to investigate acquisition across a range of diverse languages and sociocultural contexts. To this end, cross-linguistic and cross-cultural language research has been pervasive in the field of first language acquisition since the early 1980s. In groundbreaking work, Slobin (1985) noted that the study of acquisition in cross-linguistic perspective can be used to reveal both developmental universals and language-specific acquisition patterns. Since this observation there have been several waves of cross-linguistic first language acquisition research, and more recently we have seen a rise in research investigating lesser-known languages. This special issue brings together work on several such languages, spoken in minority contexts. It is the first collection of language development research dedicated to the acquisition of under-studied or little-known languages and, by extension, different cultures. Why lesser-known languages, and why minority contexts? First and foremost, acquisition theories need data from different languages, language families and cultural groups across the broadest typological array possible, and yet many theories of acquisition have been developed through analyses of English and other major world languages. Thus they are likely to be skewed by sampling bias. Languages of European origin constitute a small percentage of the total number of languages spoken worldwide. The Ethnologue (2015) lists 7102 languages spoken across the world. Of these, only 286 are of European origin, a mere 4% of the total number of languages spoken across the planet, and representing approximately 26% of the total number of language speakers alive today. Compare this to the languages of the Pacific.
The Ethnologue lists 1313 languages spoken in the Pacific, constituting 18.5% of the world’s languages. Of these, very few have been described, and even fewer have child language data available. Lieven and Stoll (2010) note that only around 70–80 languages have been the focus of acquisition studies (around 1% of the world’s languages). This somewhat alarming statistic suggests that the time is now ripe for researchers working on lesser-known languages to contribute to the field’s knowledge about how children learn a range of very different languages across differing cultures, and in doing so, for this research to make a contribution to language acquisition theory. The potential benefits are many. First, decades of descriptive work in linguistic typology have culminated in strong challenges to the existence of a Universal Grammar (Evans & Levinson, 2009), a long-held axiom of formal language acquisition theory. To be sure, cross-linguistic work in acquisition has long fuelled this debate (e.g. MacWhinney & Bates, 1989), but only as we collect a greater number of data points will we move closer toward a better understanding of the initial state of the human capacity for language and the types of social and cultural contexts in which language is successfully transmitted. A focus on linguistic diversity enables the investigation and postulation of universals in language acquisition, if and in whatever form they exist. In doing so, we can determine the sorts of things that are evident in child-directed speech, in children’s language production and in adult language, teasing out the threads at the intersection of language, culture and cognition. 
The study and dissemination of research into lesser-known, under-described languages with small communities significantly contributes to this aim because it not only reflects the diversity of languages present in the world, but provides a better representation of the social and economic conditions under which the majority of the world’s population acquire language (Henrich, Heine, & Norenzayan, 2010). Related to this point, the study of smaller languages has taken on intense urgency in the past few decades due to the rapid extinction of these languages (Evans, 2010). The Language Documentation movement has toiled tirelessly in the pursuit of documenting languages before they disappear, an effort to which child language researchers have much to offer. Many children acquire smaller and minority languages in rich multilingual environments, where the influence of dominant languages affects acquisition (e.g., Stoll, Zakharko, Moran, Schikowski, & Bickel, 2015). Understanding the acquisition process where systems compete and may be in flux due to language contact, while no small task, will help us understand the social and economic conditions which favour successful preservation of minority languages, which could ultimately equip communities with the tools to stem the flow of language loss. With these points in mind we now turn to the articles in this special issue.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2015). The processing of speech, gesture and action during language comprehension. Psychonomic Bulletin & Review, 22, 517-523. doi:10.3758/s13423-014-0681-7.

    Abstract

    Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G., Schotel, H., & Hoenkamp, E. (1982). Analyse-door-synthese van Nederlandse zinnen [Abstract]. De Psycholoog, 17, 509.
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G., & Harbusch, K. (2019). Mutual attraction between high-frequency verbs and clause types with finite verbs in early positions: Corpus evidence from spoken English, Dutch, and German. Language, Cognition and Neuroscience, 34(9), 1140-1151. doi:10.1080/23273798.2019.1642498.

    Abstract

    We report a hitherto unknown statistical relationship between the corpus frequency of finite verbs and their fixed linear positions (early vs. late) in finite clauses of English, Dutch, and German. Compared to the overall frequency distribution of verb lemmas in the corpora, high-frequency finite verbs are overused in main clauses, at the expense of nonfinite verbs. This finite versus nonfinite split of high-frequency verbs is basically absent from subordinate clauses. Furthermore, this “main-clause bias” (MCB) of high-frequency verbs is more prominent in German and Dutch (SOV languages) than in English (an SVO language). We attribute the MCB and its varying effect sizes to faster accessibility of high-frequency finite verbs, which (1) increases the probability for these verbs to land in clauses mandating early verb placement, and (2) boosts the activation of clause plans that assign verbs to early linear positions (in casu: clauses with SVO as opposed to SOV order).

    Additional information

    plcp_a_1642498_sm1530.pdf
  • Kendrick, K. H. (2015). Other-initiated repair in English. Open Linguistics, 1, 164-190. doi:10.2478/opli-2014-0009.

    Abstract

    The practices of other-initiation of repair provide speakers with a set of solutions to one of the most basic problems in conversation: troubles of speaking, hearing, and understanding. Based on a collection of 227 cases systematically identified in a corpus of English conversation, this article describes the formats and practices of other-initiations of repair attested in the corpus and reports their quantitative distribution. In addition to straight other-initiations of repair, the identification of all possible cases also yielded a substantial proportion in which speakers use other-initiations to perform other actions, including non-serious actions, such as jokes and teases, preliminaries to dispreferred responses, and displays of surprise and disbelief. A distinction is made between other-initiations that perform additional actions concurrently and those that formally resemble straight other-initiations but analyzably do not initiate repair as an action.
  • Kendrick, K. H. (2015). The intersection of turn-taking and repair: The timing of other-initiations of repair in conversation. Frontiers in Psychology, 6: 250. doi:10.3389/fpsyg.2015.00250.

    Abstract

    The transitions between turns at talk in conversation tend to occur quickly, with only a slight gap of approximately 100 to 300 ms between them. This estimate of central tendency, however, hides a wealth of complex variation, as a number of factors, such as the type of turns involved, have been shown to influence the timing of turn transitions. This article considers one specific type of turn that does not conform to the statistical trend, namely turns that deal with troubles of speaking, hearing, and understanding, known as other-initiations of repair. The results of a quantitative analysis of 169 other-initiations of repair in face-to-face conversation reveal that the most frequent cases occur after gaps of approximately 700 ms. Furthermore, other-initiations of repair that locate a source of trouble in a prior turn specifically tend to occur after shorter gaps than those that do not, and those that correct errors in a prior turn, while rare, tend to occur without delay. An analysis of the transitions before other-initiations of repair, using methods of conversation analysis, suggests that speakers use the extra time (i) to search for a late recognition of the problematic turn, (ii) to provide an opportunity for the speaker of the problematic turn to resolve the trouble independently, and (iii) to produce visual signals, such as facial gestures. In light of these results, it is argued that other-initiations of repair take priority over other turns at talk in conversation and therefore are not subject to the same rules and constraints that motivate fast turn transitions in general.
  • Kendrick, K. H., & Torreira, F. (2015). The timing and construction of preference: A quantitative study. Discourse Processes, 52(4), 255-289. doi:10.1080/0163853X.2014.955997.

    Abstract

    Conversation-analytic research has argued that the timing and construction of preferred responding actions (e.g., acceptances) differ from that of dispreferred responding actions (e.g., rejections), potentially enabling early response prediction by recipients. We examined 195 preferred and dispreferred responding actions in telephone corpora and found that the timing of the most frequent cases of each type did not differ systematically. Only for turn transitions of 700 ms or more was the proportion of dispreferred responding actions clearly greater than that of preferreds. In contrast, an analysis of the timing that included turn formats (i.e., those with or without qualification) revealed clearer differences. Small departures from a normal gap duration decrease the likelihood of a preferred action in a preferred turn format (e.g., a simple “yes”). We propose that the timing of a response is best understood as a turn-constructional feature, the first virtual component of a preferred or dispreferred turn format.
  • Kendrick, K. H., Holler, J., & Levinson, S. C. (2023). Turn-taking in human face-to-face interaction is multimodal: Gaze direction and manual gestures aid the coordination of turn transitions. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210473. doi:10.1098/rstb.2021.0473.

    Abstract

    Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature.

    Additional information

    supplemental material
  • Kholodova, A., Peter, M., Rowland, C. F., Jacob, G., & Allen, S. E. M. (2023). Abstract priming and the lexical boost effect across development in a structurally biased language. Languages, 8: 264. doi:10.3390/languages8040264.

    Abstract

    The present study investigates the developmental trajectory of abstract representations for syntactic structures in children. In a structural priming experiment on the dative alternation in German, we primed children from three different age groups (3–4 years, 5–6 years, 7–8 years) and adults with double object datives (Dora sent Boots the rabbit) or prepositional object datives (Dora sent the rabbit to Boots). Importantly, the prepositional object structure in German is dispreferred and only rarely encountered by young children. While immediate as well as cumulative structural priming effects occurred across all age groups, these effects were strongest in the 3- to 4-year-old group and gradually decreased with increasing age. These results suggest that representations in young children are less stable than in adults and, therefore, more susceptible to adaptation both immediately and across time, presumably due to stronger surprisal. Lexical boost effects, in contrast, were not present in 3- to 4-year-olds but gradually emerged with increasing age, possibly due to limited working-memory capacity in the younger child groups.
  • Wu, Q., Kidd, E., & Goodhew, S. C. (2019). The spatial mapping of concepts in English and Mandarin. Journal of Cognitive Psychology, 31(7), 703-724. doi:10.1080/20445911.2019.1663354.

    Abstract

    English speakers have been shown to map abstract concepts in space, which occurs on both the vertical and horizontal dimensions. For example, words such as God are associated with up and right spatial locations, and words such as Satan with down and left. If the tendency to map concepts in space is a universal property of human cognition, then it is likely that such mappings may be at least partly culturally-specific, since many concepts are themselves language-specific and therefore cultural conventions. Here we investigated whether Mandarin speakers report spatial mapping of concepts, and how these mappings compare with English speakers (i.e. are words with the same meaning associated with the same spatial locations). Across two studies, results showed that both native English and Mandarin speakers reported spatial mapping of concepts, and that the distribution of mappings was highly similar for the two groups. Theoretical implications are discussed.
  • Kidd, E., Arciuli, J., Christiansen, M. H., & Smithson, M. (2023). The sources and consequences of individual differences in statistical learning for language development. Cognitive Development, 66: 101335. doi:10.1016/j.cogdev.2023.101335.

    Abstract

    Statistical learning (SL)—sensitivity to statistical regularities in the environment—has been postulated to support language development. While even young infants are capable of using distributional statistics to learn in linguistic and non-linguistic domains, efforts to measure SL at the level of the individual and link it to language proficiency in individual differences designs have been mixed, which has at least in part been attributed to problems with task reliability. In the current study we present the first prospective longitudinal study of the relationship between both non-linguistic SL (measured with visual stimuli) and linguistic SL (measured with auditory stimuli) and language in a group of English-speaking children. One-hundred and twenty-one (N = 121) children in their first two years of formal schooling (Mage = 6;1 years, Range: 5;2 – 7;2) completed tests of visual SL (VSL) and auditory SL (ASL) and several control variables at time 1. Both forms of SL were then measured every 6 months for the next 18 months, and at the final testing session (time 4) their language proficiency was measured using a standardised test. The results showed that the reliability of the SL tasks increased across the course of the study. A series of path analyses showed that both VSL and ASL independently predicted individual differences in language proficiency at time 4. The evidence is consistent with the suggestion that, when measured reliably, an observable relationship between SL and language proficiency exists. Theoretical and methodological issues are discussed.

    Additional information

    data and code
  • Kidd, E., Chan, A., & Chiu, J. (2015). Cross-linguistic influence in simultaneous Cantonese–English bilingual children's comprehension of relative clauses. Bilingualism: Language and Cognition, 18(3), 438-452. doi:10.1017/S1366728914000649.

    Abstract

    The current study investigated the role of cross-linguistic influence in Cantonese–English bilingual children's comprehension of subject- and object-extracted relative clauses (RCs). Twenty simultaneous Cantonese–English bilingual children (Mage = 8;11, SD = 2;6) and 20 vocabulary-matched Cantonese monolingual children (Mage = 6;4, SD = 1;3) completed a test of Cantonese RC comprehension. The bilingual children also completed a test of English RC comprehension. The results showed that, whereas the monolingual children were equally competent on subject and object RCs, the bilingual children performed significantly better on subject RCs. Error analyses suggested that the bilingual children were most often correctly assigning thematic roles in object RCs, but were incorrectly choosing the RC subject as the head referent. This pervasive error was interpreted to be due to the fact that both Cantonese and English have canonical SVO word order, which competes with an object RC analysis.
  • Kidd, E. (2015). Incorporating learning into theories of parsing. Linguistic Approaches to Bilingualism, 5(4), 487-493. doi:10.1075/lab.5.4.08kid.
  • Kidd, E., Tennant, E., & Nitschke, S. (2015). Shared abstract representation of linguistic structure in bilingual sentence comprehension. Psychonomic Bulletin & Review, 22(4), 1062-1067. doi:10.3758/s13423-014-0775-2.

    Abstract

    Although there is strong evidence for shared abstract grammatical structure in bilingual speakers from studies of sentence production, comparable evidence from studies of comprehension is lacking. Twenty-seven (N = 27) English-German bilingual adults participated in a structural priming study where unambiguous English subject and object relative clause (RC) structures were used to prime corresponding subject and object RC interpretations of structurally ambiguous German RCs. The results showed that English object RCs primed significantly greater object RC interpretations in German in comparison to baseline and subject RC prime conditions, but that English subject RC primes did not change the participants’ baseline preferences. This is the first study to report abstract crosslinguistic priming in comprehension. The results specifically suggest that word order overlap supports the integration of syntactic structures from different languages in bilingual speakers, and that these shared representations are used in comprehension as well as production.
  • Kim, N., Brehm, L., & Yoshida, M. (2019). The online processing of noun phrase ellipsis and mechanisms of antecedent retrieval. Language, Cognition and Neuroscience, 34(2), 190-213. doi:10.1080/23273798.2018.1513542.

    Abstract

    We investigate whether grammatical information is accessed in processing noun phrase ellipsis (NPE) and other anaphoric constructions. The first experiment used an agreement attraction paradigm to reveal that ungrammatical plural verbs following NPE with an antecedent containing a plural modifier (e.g. Derek’s key to the boxes … and Mary’s_ probably *are safe in the drawer) show similar facilitation to non-elided NPs. The second experiment used the same paradigm to examine a coordination construction without anaphoric elements, and the third examined anaphoric one. Agreement attraction was not observed in either experiment, suggesting that processing NPE is different from processing non-anaphoric coordination constructions or anaphoric one. Taken together, the results indicate that the parser is sensitive to grammatical distinctions at the ellipsis site where it prioritises and retrieves the head at the initial stage of processing and retrieves the local noun within the modifier phrase only when it is necessary in parsing NPE.

    Additional information

    Kim_Brehm_Yoshida_2018sup.pdf
  • Kinoshita, S., Schubert, T., & Verdonschot, R. G. (2019). Allograph priming is based on abstract letter identities: Evidence from Japanese kana. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(1), 183-190. doi:10.1037/xlm0000563.

    Abstract

    It is well-established that allographs like the uppercase and lowercase forms of the Roman alphabet (e.g., a and A) map onto the same "abstract letter identity," orthographic representations that are independent of the visual form. Consistent with this, in the allograph match task ("Are 'a' and 'A' the same letter?"), priming by a masked letter prime is equally robust for visually dissimilar prime-target pairs (e.g., d and D) and similar pairs (e.g., c and C). However, in principle this pattern of priming is also consistent with the possibility that allograph priming is purely phonological, based on the letter name. Because different allographic forms of the same letter, by definition, share a letter name, it is impossible to rule out this possibility a priori. In the present study, we investigated the influence of shared letter names by taking advantage of the fact that Japanese is written in two distinct writing systems, syllabic kana (which has two parallel forms, hiragana and katakana) and logographic kanji. Using the allograph match task, we tested whether a kanji prime with the same pronunciation as the target kana (e.g., both pronounced /i/) produces the same amount of priming as a kana prime in the opposite kana form (e.g.,). We found that the kana primes produced substantially greater priming than the phonologically identical kanji prime, which we take as evidence that allograph priming is based on abstract kana identity, not purely phonology.
  • Kinoshita, S., & Verdonschot, R. G. (2019). On recognizing Japanese katakana words: Explaining the reduced priming with hiragana and mixed-kana identity primes. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1513-1521. doi:10.1037/xhp0000692.

    Abstract

    The Japanese kana syllabary has 2 allographic forms, hiragana and katakana. As with other allographic variants like the uppercase and lowercase letters of the Roman alphabet, they show robust form-independent priming effects in the allograph match task (e.g., Kinoshita, Schubert, & Verdonschot, 2019), suggesting that they share abstract character-level representations. In direct contradiction, Perea, Nakayama, and Lupker (2017) argued that hiragana and katakana do not share character-level representations, based on their finding of reduced priming with identity primes containing a mix of hiragana and katakana (the mixed-kana prime) relative to the all-katakana identity prime in a lexical-decision task with loanword targets written in katakana. Here we sought to reconcile these seemingly contradictory claims, using mixed-kana, hiragana, and katakana primes in lexical decision. The mixed-kana prime and hiragana prime produced priming effects that are indistinguishable, and both were reduced in size relative to the priming effect produced by the katakana identity prime. Furthermore, this pattern was unchanged when the target was presented in hiragana. The findings are interpreted in terms of the assumption that the katakana format is specified in the orthographic representation of loanwords in Japanese readers. Implications of the account for universality across writing systems are discussed.
  • De Kleijn, R., Wijnen, M., & Poletiek, F. H. (2019). The effect of context-dependent information and sentence constructions on perceived humanness of an agent in a Turing test. Knowledge-Based Systems, 163, 794-799. doi:10.1016/j.knosys.2018.10.006.

    Abstract

    In a Turing test, a judge decides whether their conversation partner is either a machine or human. What cues does the judge use to determine this? In particular, are presumably unique features of human language actually perceived as humanlike? Participants rated the humanness of a set of sentences that were manipulated for grammatical construction (linear right-branching or hierarchical center-embedded) and for plausibility with regard to world knowledge.

    We found that center-embedded sentences are perceived as less humanlike than right-branching sentences and more plausible sentences are regarded as more humanlike. However, the effect of plausibility of the sentence on perceived humanness is smaller for center-embedded sentences than for right-branching sentences.

    Participants also rated a conversation with either correct or incorrect use of the context by the agent. No effect of context use was found. Also, participants rated a full transcript of either a real human or a real chatbot, and we found that chatbots were reliably perceived as less humanlike than real humans, in line with our expectation. We did, however, find individual differences between chatbots and humans.
  • Klein, M., Van der Vloet, M., Harich, B., Van Hulzen, K. J., Onnink, A. M. H., Hoogman, M., Guadalupe, T., Zwiers, M., Groothuismink, J. M., Verberkt, A., Nijhof, B., Castells-Nobau, A., Faraone, S. V., Buitelaar, J. K., Schenck, A., Arias-Vasquez, A., Franke, B., & Psychiatric Genomics Consortium ADHD Working Group (2015). Converging evidence does not support GIT1 as an ADHD risk gene. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 168, 492-507. doi:10.1002/ajmg.b.32327.

    Abstract

    Attention-Deficit/Hyperactivity Disorder (ADHD) is a common neuropsychiatric disorder with a complex genetic background. The G protein-coupled receptor kinase interacting ArfGAP 1 (GIT1) gene was previously associated with ADHD. We aimed at replicating the association of GIT1 with ADHD and investigated its role in cognitive and brain phenotypes. Gene-wide and single variant association analyses for GIT1 were performed for three cohorts: (1) the ADHD meta-analysis data set of the Psychiatric Genomics Consortium (PGC, N=19,210), (2) the Dutch cohort of the International Multicentre persistent ADHD CollaboraTion (IMpACT-NL, N=225), and (3) the Brain Imaging Genetics cohort (BIG, N=1,300). Furthermore, functionality of the rs550818 variant as an expression quantitative trait locus (eQTL) for GIT1 was assessed in human blood samples. By using Drosophila melanogaster as a biological model system, we manipulated Git expression according to the outcome of the expression result and studied the effect of Git knockdown on neuronal morphology and locomotor activity. Association of rs550818 with ADHD was not confirmed, nor did a combination of variants in GIT1 show association with ADHD or any related measures in either of the investigated cohorts. However, the rs550818 risk-genotype did reduce GIT1 expression level. Git knockdown in Drosophila caused abnormal synapse and dendrite morphology, but did not affect locomotor activity. In summary, we could not confirm GIT1 as an ADHD candidate gene, while rs550818 was found to be an eQTL for GIT1. Despite GIT1's regulation of neuronal morphology, alterations in gene expression do not appear to have ADHD-related behavioral consequences.
  • Klein, W., & Rieck, B.-O. (1982). Der Erwerb der Personalpronomina im ungesteuerten Spracherwerb. Zeitschrift für Literaturwissenschaft und Linguistik, 45, 35-71.
  • Klein, W. (2001). Ein Gemeinwesen, in dem das Volk herrscht, darf nicht von Gesetzen beherrscht werden, die das Volk nicht versteht. Rechtshistorisches Journal, 20, 621-628.
  • Klein, W. (1982). Einige Bemerkungen zur Frageintonation. Deutsche Sprache, 4, 289-310.

    Abstract

    In the first, critical part of this study, a small sample of simple German sentences with their empirically determined pitch contours is used to demonstrate the incorrectness of numerous currently held views of German sentence intonation. In the second, more constructive part, several interrogative sentence types are analysed and an attempt is made to show that intonation, besides other functions, indicates the permanently changing 'thematic score' in on-going discourse as well as certain validity claims.
  • Klein, W. (1982). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, 12, 7-8.
  • Klein, W. (2000). An analysis of the German perfekt. Language, 76, 358-382.

    Abstract

    The German Perfekt has two quite different temporal readings, as illustrated by the two possible continuations of the sentence Peter hat gearbeitet in i, ii, respectively: (i) Peter hat gearbeitet und ist müde. Peter has worked and is tired. (ii) Peter hat gearbeitet und wollte nicht gestört werden. Peter has worked and wanted not to be disturbed. The first reading essentially corresponds to the English present perfect; the second can take a temporal adverbial with past time reference ('yesterday at five', 'when the phone rang', and so on), and an English translation would require a past tense ('Peter worked/was working'). This article shows that the Perfekt has a uniform temporal meaning that results systematically from the interaction of its three components (finiteness marking, auxiliary, and past participle) and that the two readings are the consequence of a structural ambiguity. This analysis also predicts the properties of other participle constructions, in particular the passive in German.
  • Klein, W., Li, P., & Hendriks, H. (2000). Aspect and assertion in Mandarin Chinese. Natural Language & Linguistic Theory, 18, 723-770. doi:10.1023/A:1006411825993.

    Abstract

    Chinese has a number of particles such as le, guo, zai and zhe that add a particular aspectual value to the verb to which they are attached. There have been many characterisations of this value in the literature. In this paper, we review several existing influential accounts of these particles, including those in Li and Thompson (1981), Smith (1991), and Mangione and Li (1993). We argue that all these characterisations are intuitively plausible, but none of them is precise. We propose that these particles serve to mark which part of the sentence's descriptive content is asserted, and that their aspectual value is a consequence of this function. We provide a simple and precise definition of the meanings of le, guo, zai and zhe in terms of the relationship between topic time and time of situation, and show the consequences of their interaction with different verb expressions within this new framework of interpretation.
  • Klein, W. (2000). Fatale Traditionen. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (120), 11-40.
  • Klein, W. (1991). Geile Binsenbüschel, sehr intime Gespielen: Ein paar Anmerkungen über Arno Schmidt als Übersetzer. Zeitschrift für Literaturwissenschaft und Linguistik, 84, 124-129.
  • Klein, W. (1982). Pronoms personnels et formes d'acquisition. Encrages, 8/9, 42-46.
  • Klein, W. (1991). Raumausdrücke. Linguistische Berichte, 132, 77-114.
  • Klein, W., & Von Stutterheim, C. (1991). Text structure and referential movement. Arbeitsberichte des Forschungsprogramms S&P: Sprache und Pragmatik, 22.
  • Klein, W. (Ed.). (2000). Sprache des Rechts [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (118).
  • Klein, W., & Berliner Arbeitsgruppe (2000). Sprache des Rechts: Vermitteln, Verstehen, Verwechseln. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (118), 7-33.
  • Klein, W. (1991). Was kann sich die Übersetzungswissenschaft von der Linguistik erwarten? Zeitschrift für Literaturwissenschaft und Linguistik, 84, 104-123.
  • Klein, W. (2000). Was uns die Sprache des Rechts über die Sprache sagt. Zeitschrift für Literaturwissenschaft und Linguistik, (118), 115-149.
  • Klein, W. (Ed.). (1982). Zweitspracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (45).
  • Klingler, E., De la Rossa, A., Fièvre, S., Devaraju, K., Abe, P., & Jabaudon, D. (2019). A translaminar genetic logic for the circuit identity of intracortically projecting neurons. Current Biology, 29(2), 332-339. doi:10.1016/j.cub.2018.11.071.

    Abstract

    Neurons of the neocortex are organized into six radial layers, which have appeared at different times during evolution, with the superficial layers representing a more recent acquisition. Input to the neocortex predominantly reaches superficial layers (SL, i.e., layers (L) 2-4), while output is generated in deep layers (DL, i.e., L5-6) [1]. Intracortical connections, which bridge input and output pathways, are key components of cortical circuits because they allow the propagation and processing of information within the neocortex. Two main types of intracortically projecting neurons (ICPN) can be distinguished by their axonal features: L4 spiny stellate neurons (SSN) with short axons projecting locally within cortical columns [2, 3, 4, 5], and SL and DL long-range projection neurons, including callosally projecting neurons (CPNSL and CPNDL) [5, 6]. Here, we investigate the molecular hallmarks that distinguish SSN, CPNSL, and CPNDL and relate their transcriptional signatures with their output connectivity. Specifically, taking advantage of the presence of CPN in both SL and DL, we identify lamina-independent genetic hallmarks of a constant projection motif (i.e., interhemispheric projection). By performing unbiased transcriptomic comparisons between CPNSL, CPNDL, and SSN, we provide specific molecular profiles for each of these populations and show that target identity supersedes laminar position in defining ICPN transcriptional diversity. Together, these findings reveal a projection-based organization of transcriptional programs across cortical layers, which we propose reflects a conserved strategy to protect canonical circuit structure (and hence function) across a diverse range of neuroanatomies.

  • Knösche, T. R., & Bastiaansen, M. C. M. (2001). Does the Hilbert transform improve accuracy and time resolution of ERD/ERS? Biomedizinische Technik, 46(2), 106-108.
  • Knudsen, B., Fischer, M., & Aschersleben, G. (2015). The development of Arabic digit knowledge in 4-to-7-year-old children. Journal of numerical cognition, 1(1), 21-37. doi:10.5964/jnc.v1i1.4.

    Abstract

    Recent studies indicate that Arabic digit knowledge rather than non-symbolic number knowledge is a key foundation for arithmetic proficiency at the start of a child’s mathematical career. We document the developmental trajectory of 4- to 7-year-olds’ proficiency in accessing magnitude information from Arabic digits in five tasks differing in magnitude manipulation requirements. Results showed that children from 5 years onwards accessed magnitude information implicitly and explicitly, but that 5-year-olds failed to access magnitude information explicitly when numerical magnitude was contrasted with physical magnitude. Performance across tasks revealed a clear developmental trajectory: children traverse from first knowing the cardinal values of number words to recognizing Arabic digits to knowing their cardinal values and, concurrently, their ordinal position. Correlational analyses showed a strong within-child consistency, demonstrating that this pattern is not only reflected in group differences but also in individual performance.
