Publications

  • Heim, F., Fisher, S. E., Scharff, C., Ten Cate, C., & Riebel, K. (2023). Effects of cortical FoxP1 knockdowns on learned song preference in female zebra finches. eNeuro, 10(3): ENEURO.0328-22.2023. doi:10.1523/ENEURO.0328-22.2023.

    Abstract

    The search for molecular underpinnings of human vocal communication has focused on genes encoding forkhead-box transcription factors, as rare disruptions of FOXP1, FOXP2, and FOXP4 have been linked to disorders involving speech and language deficits. In male songbirds, an animal model for vocal learning, experimentally altered expression levels of these transcription factors impair song production learning. The relative contributions of auditory processing, motor function or auditory-motor integration to the deficits observed after different FoxP manipulations in songbirds are unknown. To examine the potential effects on auditory learning and development, we focused on female zebra finches (Taeniopygia guttata) that do not sing but develop song memories, which can be assayed in operant preference tests. We tested whether the relatively high levels of FoxP1 expression in forebrain areas implicated in female song preference learning are crucial for the development and/or maintenance of this behavior. Juvenile and adult female zebra finches received FoxP1 knockdowns targeted to HVC (proper name) or to the caudomedial mesopallium (CMM). Irrespective of target site and whether the knockdown took place before (juveniles) or after (adults) the sensitive phase for song memorization, all groups preferred their tutor’s song. However, adult females with FoxP1 knockdowns targeted at HVC showed weaker motivation to hear song and weaker song preferences than sham-treated controls, while no such differences were observed after knockdowns in CMM or in juveniles. In summary, FoxP1 knockdowns in the cortical song nucleus HVC were not associated with impaired tutor song memory but reduced motivation to actively request tutor songs.
  • Heinemann, T. (2006). Will you or can't you? Displaying entitlement in interrogative requests. Journal of Pragmatics, 38(7), 1081-1104. doi:10.1016/j.pragma.2005.09.013.

    Abstract

    Interrogative structures such as ‘Could you pass the salt?’ and ‘Couldn’t you pass the salt?’ can be used for making requests. A study of such pairs within a conversation analytic framework suggests that these are not used interchangeably, and that they have different impacts on the interaction. Focusing on Danish interactions between elderly care recipients and their home help assistants, I demonstrate how the care recipient displays different degrees of stance towards whether she is entitled to make a request or not, depending on whether she formats her request as a positive or a negative interrogative. With a positive interrogative request, the care recipient orients to her request as one she is not entitled to make. This is underscored by other features, such as the use of mitigating devices and the choice of verb. When accounting for this type of request, the care recipient ties the request to the specific situation she is in, at the moment in which the request is produced. In turn, the home help assistant orients to the lack of entitlement by resisting the request. With a negative interrogative request, the care recipient, in contrast, orients to her request as one she is entitled to make. This is strengthened by the choice of verb and the lack of mitigating devices. When such requests are accounted for, the requested task is treated as something that should be routinely performed, and hence as something the home help assistant has neglected to do. In turn, the home help assistant orients to the display of entitlement by treating the request as unproblematic, and by complying with it immediately.
  • Hellwig, B., Allen, S. E. M., Davidson, L., Defina, R., Kelly, B. F., & Kidd, E. (2023). Introduction: The acquisition sketch project. Language Documentation and Conservation Special Publication, 28, 1-3. Retrieved from https://hdl.handle.net/10125/74718.
  • Henderson, L., Coltheart, M., Cutler, A., & Vincent, N. (1988). Preface. Linguistics, 26(4), 519-520. doi:10.1515/ling.1988.26.4.519.
  • Henke, L., Lewis, A. G., & Meyer, L. (2023). Fast and slow rhythms of naturalistic reading revealed by combined eye-tracking and electroencephalography. The Journal of Neuroscience, 43(24), 4461-4469. doi:10.1523/JNEUROSCI.1849-22.2023.

    Abstract

    Neural oscillations are thought to support speech and language processing. They may not only inherit acoustic rhythms, but might also impose endogenous rhythms onto processing. In support of this, we here report that human (both male and female) eye movements during naturalistic reading exhibit rhythmic patterns that show frequency-selective coherence with the EEG, in the absence of any stimulation rhythm. Periodicity was observed in two distinct frequency bands: First, word-locked saccades at 4-5 Hz display coherence with whole-head theta-band activity. Second, fixation durations fluctuate rhythmically at ∼1 Hz, in coherence with occipital delta-band activity. This latter effect was additionally phase-locked to sentence endings, suggesting a relationship with the formation of multi-word chunks. Together, eye movements during reading contain rhythmic patterns that occur in synchrony with oscillatory brain activity. This suggests that linguistic processing imposes preferred processing time scales onto reading, largely independent of actual physical rhythms in the stimulus.
  • Hersh, T. A., Ravignani, A., & Burchardt, L. (2023). Robust rhythm reporting will advance ecological and evolutionary research. Methods in Ecology and Evolution, 14(6), 1398-1407. doi:10.1111/2041-210X.14118.

    Abstract

    Rhythmicity in the millisecond to second range is a fundamental building block of communication and coordinated movement. But how widespread are rhythmic capacities across species, and how did they evolve under different environmental pressures? Comparative research is necessary to answer these questions but has been hindered by limited crosstalk and comparability among results from different study species.
    Most acoustics studies do not explicitly focus on characterising or quantifying rhythm, but many are just a few scrapes away from contributing to and advancing the field of comparative rhythm research. Here, we present an eight-level rhythm reporting framework which details actionable steps researchers can take to report rhythm-relevant metrics. Levels fall into two categories: metric reporting and data sharing. Metric reporting levels include defining rhythm-relevant metrics, providing point estimates of temporal interval variability, reporting interval distributions, and conducting rhythm analyses. Data sharing levels are: sharing audio recordings, sharing interval durations, sharing sound element start and end times, and sharing audio recordings with sound element start/end times.
    Using sounds recorded from a sperm whale as a case study, we demonstrate how each reporting framework level can be implemented on real data. We also highlight existing best practice examples from recent research spanning multiple species. We clearly detail how engagement with our framework can be tailored case-by-case based on how much time and effort researchers are willing to contribute. Finally, we illustrate how reporting at any of the suggested levels will help advance comparative rhythm research.
    This framework will actively facilitate a comparative approach to acoustic rhythms while also promoting cooperation and data sustainability. By quantifying and reporting rhythm metrics more consistently and broadly, new avenues of inquiry and several long-standing, big picture research questions become more tractable. These lines of research can inform not only about the behavioural ecology of animals but also about the evolution of rhythm-relevant phenomena and the behavioural neuroscience of rhythm production and perception. Rhythm is clearly an emergent feature of life; adopting our framework, researchers from different fields and with different study species can help understand why.

    Additional information

    Research Data availability
  • Hervais-Adelman, A., Carlyon, R. P., Johnsrude, I. S., & Davis, M. H. (2012). Brain regions recruited for the effortful comprehension of noise-vocoded words. Language and Cognitive Processes, 27(7-8), 1145-1166. doi:10.1080/01690965.2012.662280.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to investigate the neural basis of comprehension and perceptual learning of artificially degraded [noise vocoded (NV)] speech. Fifteen participants were scanned while listening to 6-channel vocoded words, which are difficult for naive listeners to comprehend, but can be readily learned with appropriate feedback presentations. During three test blocks, we compared responses to potentially intelligible NV words, incomprehensible distorted words and clear speech. Training sessions were interleaved with the test sessions and included paired presentation of clear then noise-vocoded words: a type of feedback that enhances perceptual learning. Listeners' comprehension of NV words improved significantly as a consequence of training. Listening to NV compared to clear speech activated left insula, and prefrontal and motor cortices. These areas, which are implicated in speech production, may play an active role in supporting the comprehension of degraded speech. Elevated activation in the precentral gyrus during paired clear-then-distorted presentations that enhance learning further suggests a role for articulatory representations of speech in perceptual learning of degraded speech.
  • Hervais-Adelman, A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2019). Learning to read recycles visual cortical networks without destruction. Science Advances, 5(9): eaax0262. doi:10.1126/sciadv.aax0262.

    Abstract

    Learning to read is associated with the appearance of an orthographically sensitive brain region known as the visual word form area. It has been claimed that development of this area proceeds by impinging upon territory otherwise available for the processing of culturally relevant stimuli such as faces and houses. In a large-scale functional magnetic resonance imaging study of a group of individuals of varying degrees of literacy (from completely illiterate to highly literate), we examined cortical responses to orthographic and nonorthographic visual stimuli. We found that literacy enhances responses to other visual input in early visual areas and enhances representational similarity between text and faces, without reducing the extent of response to nonorthographic input. Thus, acquisition of literacy in childhood recycles existing object representation mechanisms but without destructive competition.

    Additional information

    aax0262_SM.pdf
  • Heyselaar, E., & Segaert, K. (2019). Memory encoding of syntactic information involves domain-general attentional resources: Evidence from dual-task studies. Quarterly Journal of Experimental Psychology, 72(6), 1285-1296. doi:10.1177/1747021818801249.

    Abstract

    We investigate the type of attention (domain-general or language-specific) used during syntactic processing. We focus on syntactic priming: In this task, participants listen to a sentence that describes a picture (prime sentence), followed by a picture the participants need to describe (target sentence). We measure the proportion of times participants use the syntactic structure they heard in the prime sentence to describe the current target sentence as a measure of syntactic processing. Participants simultaneously conducted a motion-object tracking (MOT) task, a task commonly used to tax domain-general attentional resources. We manipulated the number of objects the participant had to track; we thus measured participants’ ability to process syntax while their attention was untaxed, slightly taxed, or heavily taxed. Performance in the MOT task was significantly worse when conducted as a dual task than as a single task. We observed an inverted U-shaped curve in priming magnitude when the MOT task was conducted concurrently with prime sentences (i.e., memory encoding), but no effect when it was conducted with target sentences (i.e., memory retrieval). Our results illustrate how, during the encoding of syntactic information, domain-general attention differentially affects syntactic processing, whereas during the retrieval of syntactic information domain-general attention does not influence syntactic processing.
  • Hintz, F., Khoe, Y. H., Strauß, A., Psomakas, A. J. A., & Holler, J. (2023). Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cognitive, Affective and Behavioral Neuroscience, 23, 340-353. doi:10.3758/s13415-023-01074-8.

    Abstract

    In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded electroencephalogram from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke slightly preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
  • Hintz, F., Voeten, C. C., & Scharenborg, O. (2023). Recognizing non-native spoken words in background noise increases interference from the native language. Psychonomic Bulletin & Review, 30, 1549-1563. doi:10.3758/s13423-022-02233-7.

    Abstract

    Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition—especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half were presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood of fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one’s native language during non-native spoken-word recognition under adverse conditions.

    Additional information

    table 2 target-absent items
  • Hoedemaker, R. S., & Meyer, A. S. (2019). Planning and coordination of utterances in a joint naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(4), 732-752. doi:10.1037/xlm0000603.

    Abstract

    Dialogue requires speakers to coordinate. According to the model of dialogue as joint action, interlocutors achieve this coordination by corepresenting their own and each other’s task share in a functionally equivalent manner. In two experiments, we investigated this corepresentation account using an interactive joint naming task in which pairs of participants took turns naming sets of objects on a shared display. Speaker A named the first, or the first and third object, and Speaker B named the second object. In control conditions, Speaker A named one, two, or all three objects and Speaker B remained silent. We recorded the timing of the speakers’ utterances and Speaker A’s eye movements. Interturn pause durations indicated that the speakers effectively coordinated their utterances in time. Speaker A’s speech onset latencies depended on the number of objects they named, but were unaffected by Speaker B’s naming task. This suggests speakers were not fully incorporating their partner’s task into their own speech planning. Moreover, Speaker A’s eye movements indicated that they were much less likely to attend to objects their partner named than to objects they named themselves. When speakers did inspect their partner’s objects, viewing times were too short to suggest that speakers were retrieving these object names as if they were planning to name the objects themselves. These results indicate that speakers prioritized planning their own responses over attending to their interlocutor’s task and suggest that effective coordination can be achieved without full corepresentation of the partner’s task.
  • Hoeks, J. C. J., Hendriks, P., Vonk, W., Brown, C. M., & Hagoort, P. (2006). Processing the noun phrase versus sentence coordination ambiguity: Thematic information does not completely eliminate processing difficulty. Quarterly Journal of Experimental Psychology, 59(9), 1581-1599. doi:10.1080/17470210500268982.

    Abstract

    When faced with the noun phrase (NP) versus sentence (S) coordination ambiguity as in, for example, The thief shot the jeweller and the cop …, readers prefer the reading with NP-coordination (e.g., "The thief shot the jeweller and the cop yesterday") over one with two conjoined sentences (e.g., "The thief shot the jeweller and the cop panicked"). A corpus study is presented showing that NP-coordinations are produced far more often than S-coordinations, which in frequency-based accounts of parsing might be taken to explain the NP-coordination preference. In addition, we describe an eye-tracking experiment investigating S-coordinated sentences such as Jasper sanded the board and the carpenter laughed, where the poor thematic fit between carpenter and sanded argues against NP-coordination. Our results indicate that information regarding poor thematic fit was used rapidly, but not without leaving some residual processing difficulty. This is compatible with claims that thematic information can reduce but not completely eliminate garden-path effects.
  • Holler, J., & Levinson, S. C. (2019). Multimodal language processing in human communication. Trends in Cognitive Sciences, 23(8), 639-652. doi:10.1016/j.tics.2019.05.006.

    Abstract

    Multiple layers of visual (and vocal) signals, plus their different onsets and offsets, represent a significant semantic and temporal binding problem during face-to-face conversation. Despite this complex unification process, multimodal messages appear to be processed faster than unimodal messages.

    Multimodal gestalt recognition and multilevel prediction are proposed to play a crucial role in facilitating multimodal language processing.

    The basis of the processing mechanisms involved in multimodal language comprehension is hypothesized to be domain general, coopted for communication, and refined with domain-specific characteristics.

    A new, situated framework for understanding human language processing is called for, one that takes into consideration the multilayered, multimodal nature of language and its production and comprehension in conversational interaction, which requires fast processing.
  • Hoogman, M., Rijpkema, M., Janss, L., Brunner, H., Fernandez, G., Buitelaar, J., Franke, B., & Arias-Vásquez, A. (2012). Current self-reported symptoms of attention deficit/hyperactivity disorder are associated with total brain volume in healthy adults. PLoS One, 7(2), e31273. doi:10.1371/journal.pone.0031273.

    Abstract

    Background:
    Reduced total brain volume is a consistent finding in children with Attention Deficit/Hyperactivity Disorder (ADHD). In order to get a better understanding of the neurobiology of ADHD, we take the first step in studying the dimensionality of current self-reported adult ADHD symptoms, by looking at its relation with total brain volume.

    Methodology/Principal Findings:
    In a sample of 652 highly educated adults, the association between total brain volume, assessed with magnetic resonance imaging, and current number of self-reported ADHD symptoms was studied. The results showed an association between these self-reported ADHD symptoms and total brain volume. Post-hoc analysis revealed that the symptom domain of inattention had the strongest association with total brain volume. In addition, the threshold for impairment coincides with the threshold for brain volume reduction.

    Conclusions/Significance:
    This finding improves our understanding of the biological substrates of self-reported ADHD symptoms, and suggests total brain volume as a target intermediate phenotype for future gene-finding in ADHD.
  • De Hoop, H., Levshina, N., & Segers, M. (2023). The effect of the use of T and V pronouns in Dutch HR communication. Journal of Pragmatics, 203, 96-109. doi:10.1016/j.pragma.2022.11.017.

    Abstract

    In an online experiment among native speakers of Dutch we measured addressees' responses to emails written in the informal pronoun T or the formal pronoun V in HR communication. 172 participants (61 male, mean age 37 years) read either the V-versions or the T-versions of two invitation emails and two rejection emails by four different fictitious recruiters. After each email, participants had to score their appreciation of the company and the recruiter on five different scales each, such as The recruiter who wrote this email seems … [scale from friendly to unfriendly]. We hypothesized that (i) the V-pronoun would be more appreciated in letters of rejection, and the T-pronoun in letters of invitation, and (ii) older people would appreciate the V-pronoun more than the T-pronoun, and the other way around for younger people. Although neither of these hypotheses was supported, we did find a small effect of pronoun: Emails written in V were more highly appreciated than emails in T, irrespective of type of email (invitation or rejection), and irrespective of the participant's age, gender, and level of education. At the same time, we observed differences in the strength of this effect across different scales.
  • Hörpel, S. G., & Firzlaff, U. (2019). Processing of fast amplitude modulations in bat auditory cortex matches communication call-specific sound features. Journal of Neurophysiology, 121(4), 1501-1512. doi:10.1152/jn.00748.2018.
  • Horton, S., Jackson, V., Boyce, J., Franken, M.-C., Siemers, S., St John, M., Hearps, S., Van Reyk, O., Braden, R., Parker, R., Vogel, A. P., Eising, E., Amor, D. J., Irvine, J., Fisher, S. E., Martin, N. G., Reilly, S., Bahlo, M., Scheffer, I., & Morgan, A. (2023). Self-reported stuttering severity is accurate: Informing methods for large-scale data collection in stuttering. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2023_JSLHR-23-00081.

    Abstract

    Purpose:
    To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.

    Method:
    Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5–84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.

    Results:
    There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.

    Conclusions:
    Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-report stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.
  • Howe, L., Lawson, D. J., Davies, N. M., St Pourcain, B., Lewis, S. J., Smith, G. D., & Hemani, G. (2019). Genetic evidence for assortative mating on alcohol consumption in the UK Biobank. Nature Communications, 10: 5039. doi:10.1038/s41467-019-12424-x.

    Abstract

    Alcohol use is correlated within spouse-pairs, but it is difficult to disentangle effects of alcohol consumption on mate-selection from social factors or the shared spousal environment. We hypothesised that genetic variants related to alcohol consumption may, via their effect on alcohol behaviour, influence mate selection. Here, we find strong evidence that an individual’s self-reported alcohol consumption and their genotype at rs1229984, a missense variant in ADH1B, are associated with their partner’s self-reported alcohol use. Applying Mendelian randomization, we estimate that a unit increase in an individual’s weekly alcohol consumption increases partner’s alcohol consumption by 0.26 units (95% C.I. 0.15, 0.38; P = 8.20 × 10⁻⁶). Furthermore, we find evidence of spousal genotypic concordance for rs1229984, suggesting that spousal concordance for alcohol consumption existed prior to cohabitation. Although the SNP is strongly associated with ancestry, our results suggest some concordance independent of population stratification. Our findings suggest that alcohol behaviour directly influences mate selection.
  • Howe, L. J., Richardson, T. G., Arathimos, R., Alvizi, L., Passos-Bueno, M. R., Stanier, P., Nohr, E., Ludwig, K. U., Mangold, E., Knapp, M., Stergiakouli, E., St Pourcain, B., Smith, G. D., Sandy, J., Relton, C. L., Lewis, S. J., Hemani, G., & Sharp, G. C. (2019). Evidence for DNA methylation mediating genetic liability to non-syndromic cleft lip/palate. Epigenomics, 11(2), 133-145. doi:10.2217/epi-2018-0091.

    Abstract

    Aim: To determine if nonsyndromic cleft lip with or without cleft palate (nsCL/P) genetic risk variants influence liability to nsCL/P through gene regulation pathways, such as those involving DNA methylation. Materials & methods: nsCL/P genetic summary data and methylation data from four studies were used in conjunction with Mendelian randomization and joint likelihood mapping to investigate potential mediation of nsCL/P genetic variants. Results & conclusion: Evidence was found at VAX1 (10q25.3), LOC146880 (17q23.3) and NTN1 (17p13.1), that liability to nsCL/P and variation in DNA methylation might be driven by the same genetic variant, suggesting that genetic variation at these loci may increase liability to nsCL/P by influencing DNA methylation. Follow-up analyses using different tissues and gene expression data provided further insight into possible biological mechanisms.

    Additional information

    Supplementary material
  • Hribar, A., Haun, D. B. M., & Call, J. (2012). Children’s reasoning about spatial relational similarity: The effect of alignment and relational complexity. Journal of Experimental Child Psychology, 111, 490-500. doi:10.1016/j.jecp.2011.11.004.

    Abstract

    We investigated 4- and 5-year-old children’s mapping strategies in a spatial task. Children were required to find a picture in an array of three identical cups after observing another picture being hidden in another array of three cups. The arrays were either aligned one behind the other in two rows or placed side by side forming one line. Moreover, children were rewarded for two different mapping strategies. Half of the children needed to choose a cup that held the same relative position as the rewarded cup in the other array; they needed to map left–left, middle–middle, and right–right cups together (aligned mapping), which required encoding and mapping of two relations (e.g., the cup left of the middle cup and left of the right cup). The other half needed to map together the cups that held the same relation to the table’s spatial features—the cups at the edges, the middle cups, and the cups in the middle of the table (landmark mapping)—which required encoding and mapping of one relation (e.g., the cup at the table’s edge). Results showed that children’s success was constellation dependent; performance was higher when the arrays were aligned one behind the other in two rows than when they were placed side by side. Furthermore, children showed a preference for landmark mapping over aligned mapping.
  • Hubbard, R. J., Rommers, J., Jacobs, C. L., & Federmeier, K. D. (2019). Downstream behavioral and electrophysiological consequences of word prediction on recognition memory. Frontiers in Human Neuroscience, 13: 291. doi:10.3389/fnhum.2019.00291.

    Abstract

    When people process language, they can use context to predict upcoming information, influencing processing and comprehension as seen in both behavioral and neural measures. Although numerous studies have shown immediate facilitative effects of confirmed predictions, the downstream consequences of prediction have been less explored. In the current study, we examined those consequences by probing participants’ recognition memory for words after they read sets of sentences. Participants read strongly and weakly constraining sentences with expected or unexpected endings (“I added my name to the list/basket”), and later were tested on their memory for the sentence endings while EEG was recorded. Critically, the memory test contained words that were predictable (“list”) but were never read (participants saw “basket”). Behaviorally, participants showed successful discrimination between old and new items, but false alarmed to the expected-item lures more often than to new items, showing that predicted words or concepts can linger, even when predictions are disconfirmed. Although false alarm rates did not differ by constraint, event-related potentials (ERPs) differed between false alarms to strongly and weakly predictable words. Additionally, previously unexpected (compared to previously expected) endings that appeared on the memory test elicited larger N1 and LPC amplitudes, suggesting greater attention and episodic recollection. In contrast, highly predictable sentence endings that had been read elicited reduced LPC amplitudes during the memory test. Thus, prediction can facilitate processing in the moment, but can also lead to false memory and reduced recollection for predictable information.
  • Hubers, F., Cucchiarini, C., Strik, H., & Dijkstra, T. (2019). Normative data of Dutch idiomatic expressions: Subjective judgments you can bank on. Frontiers in Psychology, 10: 1075. doi:10.3389/fpsyg.2019.01075.

    Abstract

    The processing of idiomatic expressions is a topical issue in empirical research. Various factors have been found to influence idiom processing, such as idiom familiarity and idiom transparency. Information on these variables is usually obtained through norming studies. Studies investigating the effect of various properties on idiom processing have led to ambiguous results. This may be due to the variability of operationalizations of the idiom properties across norming studies, which in turn may affect the reliability of the subjective judgements. However, not all studies that collected normative data on idiomatic expressions investigated their reliability, and studies that did address the reliability of subjective ratings used various measures and produced mixed results. In this study, we investigated the reliability of subjective judgements, the relation between subjective and objective idiom frequency, and the impact of these dimensions on the participants’ idiom knowledge by collecting normative data of five subjective idiom properties (Frequency of Exposure, Meaning Familiarity, Frequency of Usage, Transparency, and Imageability) from 390 native speakers and objective corpus frequency for 374 Dutch idiomatic expressions. For reliability, we compared measures calculated in previous studies, with the D-coefficient, a metric taken from Generalizability Theory. High reliability was found for all subjective dimensions. One reliability metric, Krippendorff’s alpha, generally produced lower values, while similar values were obtained for three other measures (Cronbach’s alpha, Intraclass Correlation Coefficient, and the D-coefficient). Advantages of the D-coefficient are that it can be applied to unbalanced research designs, and to estimate the minimum number of raters required to obtain reliable ratings. 
    Slightly higher coefficients were observed for so-called experience-based dimensions (Frequency of Exposure, Meaning Familiarity, and Frequency of Usage) than for content-based dimensions (Transparency and Imageability). In addition, fewer raters were required to obtain reliable ratings for the experience-based dimensions. Subjective and objective frequency appeared to be poorly correlated, while all subjective idiom properties and objective frequency turned out to affect idiom knowledge. Meaning Familiarity, Subjective and Objective Frequency of Exposure, Frequency of Usage, and Transparency positively contributed to idiom knowledge, while a negative effect was found for Imageability. We discuss these relationships in more detail, and give methodological recommendations with respect to the procedures and the measure to calculate reliability.

    Additional information

    supplementary material
  • Huettig, F., & Pickering, M. (2019). Literacy advantages beyond reading: Prediction of spoken language. Trends in Cognitive Sciences, 23(6), 464-475. doi:10.1016/j.tics.2019.03.008.

    Abstract

    Literacy has many obvious benefits—it exposes the reader to a wealth of new information and enhances syntactic knowledge. However, we argue that literacy has an additional, often overlooked, benefit: it enhances people’s ability to predict spoken language, thereby aiding comprehension. Readers are under pressure to process information more quickly than listeners, and reading provides excellent conditions, in particular a stable environment, for training the predictive system. It also leads to increased awareness of words as linguistic units, and to more fine-grained phonological and additional orthographic representations, which sharpen lexical representations and facilitate the retrieval of predicted representations. Thus, reading trains core processes and representations involved in language prediction that are common to both reading and listening.
  • Huettig, F., & Guerra, E. (2019). Effects of speech rate, preview time of visual context, and participant instructions reveal strong limits on prediction in language processing. Brain Research, 1706, 196-208. doi:10.1016/j.brainres.2018.11.013.

    Abstract

    There is a consensus among language researchers that people can predict upcoming language. But do people always predict when comprehending language? Notions that “brains … are essentially prediction machines” certainly suggest so. In three eye-tracking experiments we tested this view. Participants listened to simple Dutch sentences (‘Look at the displayed bicycle’) while viewing four objects (a target, e.g. a bicycle, and three unrelated distractors). We used the identical visual stimuli and the same spoken sentences but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object because only the targets but not the distractors agreed in gender with the determiner. In Experiment 1, participants had four seconds preview and sentences were presented either in a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led only to a small prediction effect in the normal speech condition. Thus, a normal speech rate only afforded prediction if participants had an extensive preview. Even the explicit instruction to predict the target resulted in only a small anticipation effect with a normal speech rate and a short preview. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
  • Huettig, F., Mishra, R. K., & Olivers, C. N. (2012). Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology, 2, 394. doi:10.3389/fpsyg.2011.00394.

    Abstract

    The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry, and the testing of distinct non-student participant populations, will prove critical.
  • Huettig, F., Quinlan, P. T., McDonald, S. A., & Altmann, G. T. M. (2006). Models of high-dimensional semantic space predict language-mediated eye movements in the visual world. Acta Psychologica, 121(1), 65-80. doi:10.1016/j.actpsy.2005.06.002.

    Abstract

    In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 84–107]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world and provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on evidence that may help adjudicate between different theoretical accounts of psychological semantics.
  • Huettig, F., Voeten, C. C., Pascual, E., Liang, J., & Hintz, F. (2023). Do autistic children differ in language-mediated prediction? Cognition, 239: 105571. doi:10.1016/j.cognition.2023.105571.

    Abstract

    Prediction appears to be an important characteristic of the human mind. It has also been suggested that prediction is a core difference of autistic children. Past research exploring language-mediated anticipatory eye movements in autistic children, however, has been somewhat contradictory, with some studies finding normal anticipatory processing in autistic children with low levels of autistic traits but others observing weaker prediction effects in autistic children with less receptive language skills. Here we investigated language-mediated anticipatory eye movements in young children who differed in the severity of their level of autistic traits and were in professional institutional care in Hangzhou, China. We chose the same spoken sentences (translated into Mandarin Chinese) and visual stimuli as a previous study which observed robust prediction effects in young children (Mani & Huettig, 2012) and included a control group of typically-developing children. Typically developing but not autistic children showed robust prediction effects. Most interestingly, autistic children with lower communication, motor, and (adaptive) behavior scores exhibited both less predictive and non-predictive visual attention behavior. Our results raise the possibility that differences in language-mediated anticipatory eye movements in autistic children with higher levels of autistic traits may be differences in visual attention in disguise, a hypothesis that needs further investigation.
  • Huettig, F., & Ferreira, F. (2023). The myth of normal reading. Perspectives on Psychological Science, 18(4), 863-870. doi:10.1177/17456916221127226.

    Abstract

    We argue that the educational and psychological sciences must embrace the diversity of reading rather than chase the phantom of normal reading behavior. We critically discuss the research practice of asking participants in experiments to read “normally”. We then draw attention to the large cross-cultural and linguistic diversity around the world and consider the enormous diversity of reading situations and goals. Finally, we observe that people bring a huge diversity of brains and experiences to the reading task. This leads to certain implications. First, there are important lessons for how to conduct psycholinguistic experiments. Second, we need to move beyond Anglo-centric reading research and produce models of reading that reflect the large cross-cultural diversity of languages and types of writing systems. Third, we must acknowledge that there are multiple ways of reading and reasons for reading, and none of them is normal or better or a “gold standard”. Finally, we must stop stigmatizing individuals who read differently and for different reasons, and there should be increased focus on teaching the ability to extract information relevant to the person’s goals. What is important is not how well people decode written language and how fast people read but what people comprehend given their own stated goals.
  • Huisman, J. L. A., Majid, A., & Van Hout, R. (2019). The geographical configuration of a language area influences linguistic diversity. PLoS One, 14(6): e0217363. doi:10.1371/journal.pone.0217363.

    Abstract

    Like the transfer of genetic variation through gene flow, language changes constantly as a result of its use in human interaction. Contact between speakers is most likely to happen when they are close in space, time, and social setting. Here, we investigated the role of geographical configuration in this process by studying linguistic diversity in Japan, which comprises a large connected mainland (less isolation, more potential contact) and smaller island clusters of the Ryukyuan archipelago (more isolation, less potential contact). We quantified linguistic diversity using dialectometric methods, and performed regression analyses to assess the extent to which distance in space and time predict contemporary linguistic diversity. We found that language diversity in general increases as geographic distance increases and as time passes—as with biodiversity. Moreover, we found that (I) for mainland languages, linguistic diversity is most strongly related to geographic distance—a so-called isolation-by-distance pattern, and that (II) for island languages, linguistic diversity reflects the time since varieties separated and diverged—an isolation-by-colonisation pattern. Together, these results confirm previous findings that (linguistic) diversity is shaped by distance, but also go beyond this by demonstrating the critical role of geographic configuration.
  • Huisman, J. L. A., Van Hout, R., & Majid, A. (2023). Cross-linguistic constraints and lineage-specific developments in the semantics of cutting and breaking in Japonic and Germanic. Linguistic Typology, 27(1), 41-75. doi:10.1515/lingty-2021-2090.

    Abstract

    Semantic variation in the cutting and breaking domain has been shown to be constrained across languages in a previous typological study, but it was unclear whether Japanese was an outlier in this domain. Here we revisit cutting and breaking in the Japonic language area by collecting new naming data for 40 videoclips depicting cutting and breaking events in Standard Japanese, the highly divergent Tohoku dialects, as well as four related Ryukyuan languages (Amami, Okinawa, Miyako and Yaeyama). We find that the Japonic languages recapitulate the same semantic dimensions attested in the previous typological study, confirming that semantic variation in the domain of cutting and breaking is indeed cross-linguistically constrained. We then compare our new Japonic data to previously collected Germanic data and find that, in general, related languages resemble each other more than unrelated languages, and that the Japonic languages resemble each other more than the Germanic languages do. Nevertheless, English resembles all of the Japonic languages more than it resembles Swedish. Together, these findings show that the rate and extent of semantic change can differ between language families, indicating the existence of lineage-specific developments on top of universal cross-linguistic constraints.
  • Huizeling, E., Alday, P. M., Peeters, D., & Hagoort, P. (2023). Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia, 191: 108730. doi:10.1016/j.neuropsychologia.2023.108730.

    Abstract

    EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing - when the verb was predictive of the noun - or during noun processing - when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and unpredictable nouns. We replicated typical effects of noun congruence but not predictability on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports, where the visual context may have facilitated processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing and the length of time fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.
  • Hulten, A., Schoffelen, J.-M., Udden, J., Lam, N. H. L., & Hagoort, P. (2019). How the brain makes sense beyond the processing of single words – An MEG study. NeuroImage, 186, 586-594. doi:10.1016/j.neuroimage.2018.11.035.

    Abstract

    Human language processing involves combinatorial operations that make human communication stand out in the animal kingdom. These operations rely on a dynamic interplay between the inferior frontal and the posterior temporal cortices. Using source reconstructed magnetoencephalography, we tracked language processing in the brain, in order to investigate how individual words are interpreted when part of sentence context. The large sample size in this study (n = 68) allowed us to assess how event-related activity is associated across distinct cortical areas, by means of inter-areal co-modulation within an individual. We showed that, within 500 ms of seeing a word, the word's lexical information has been retrieved and unified with the sentence context. This does not happen in a strictly feed-forward manner, but by means of co-modulation between the left posterior temporal cortex (LPTC) and left inferior frontal cortex (LIFC), for each individual word. The co-modulation of LIFC and LPTC occurs around 400 ms after the onset of each word, across the progression of a sentence. Moreover, these core language areas are supported early on by the attentional network. The results provide a detailed description of the temporal orchestration related to single word processing in the context of ongoing language.

    Additional information

    1-s2.0-S1053811918321165-mmc1.pdf
  • Hustá, C., Dalmaijer, E., Belopolsky, A., & Mathôt, S. (2019). The pupillary light response reflects visual working memory content. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1522-1528. doi:10.1037/xhp0000689.

    Abstract

    Recent studies have shown that the pupillary light response (PLR) is modulated by higher cognitive functions, presumably through activity in visual sensory brain areas. Here we use the PLR to test the involvement of sensory areas in visual working memory (VWM). In two experiments, participants memorized either bright or dark stimuli. We found that pupils were smaller when a prestimulus cue indicated that a bright stimulus should be memorized; this reflects a covert shift of attention during encoding of items into VWM. Crucially, we obtained the same result with a poststimulus cue, which shows that internal shifts of attention within VWM affect pupil size as well. Strikingly, the effect of VWM content on pupil size was most pronounced immediately after the poststimulus cue, and then dissipated. This suggests that a shift of attention within VWM momentarily activates an "active" memory representation, but that this representation quickly transforms into a "hidden" state that does not rely on sensory areas.

    Additional information

    Supplementary_xhp0000689.docx
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2023). Effects of picture naming and categorization on concurrent comprehension: Evidence from the N400. Collabra: Psychology, 9(1): 88129. doi:10.1525/collabra.88129.

    Abstract

    In conversations, interlocutors concurrently perform two related processes: speech comprehension and speech planning. We investigated effects of speech planning on comprehension using EEG. Dutch speakers listened to sentences that ended with expected or unexpected target words. In addition, a picture was presented two seconds after target onset (Experiment 1) or 50 ms before target onset (Experiment 2). Participants’ task was to name the picture or to stay quiet depending on the picture category. In Experiment 1, we found a strong N400 effect in response to unexpected compared to expected target words. Importantly, this N400 effect was reduced in Experiment 2 compared to Experiment 1. Unexpectedly, the N400 effect was not smaller in the naming condition than in the categorization condition. This indicates that conceptual preparation or the decision whether to speak (taking place in both task conditions of Experiment 2), rather than processes specific to word planning, interferes with comprehension.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2019). How in-group bias influences source memory for words learned from in-group and out-group speakers. Frontiers in Human Neuroscience, 13: 308. doi:10.3389/fnhum.2019.00308.

    Abstract

    Individuals rapidly extract information about others’ social identity, including whether or not they belong to their in-group. Group membership status has been shown to affect how attentively people encode information conveyed by those others. These findings are highly relevant for the field of psycholinguistics, where there is an open debate on how words are represented in the mental lexicon and how abstract or context-specific these representations are. Here, we used a novel word learning paradigm to test our proposal that the group membership status of speakers also affects how speaker-specific the representations of novel words are. Participants learned new words from speakers who either attended their own university (in-group speakers) or did not (out-group speakers) and performed a task to measure their individual in-group bias. Then, their source memory of the new words was tested in a recognition test to probe the speaker-specific content of the novel lexical representations and assess how it related to individual in-group biases. We found that speaker group membership and participants’ in-group bias affected participants’ decision biases: the stronger the in-group bias, the more cautious participants were in their decisions. This applied particularly to in-group-related decisions. These findings indicate that social biases can influence recognition thresholds. Taking a broader scope, defining how information is represented is a topic of great overlap between the fields of memory and psycholinguistics. Nevertheless, researchers from these fields tend to stay within the theoretical and methodological borders of their own field, missing the chance to deepen their understanding of phenomena that are of common interest. Here we show how methodologies developed in the memory field can be implemented in language research to shed light on an important theoretical issue that relates to the composition of lexical representations.

    Additional information

    Supplementary material
  • IJzerman, H., Gallucci, M., Pouw, W., Weißgerber, S. C., Van Doesum, N. J., & Williams, K. D. (2012). Cold-blooded loneliness: Social exclusion leads to lower skin temperatures. Acta Psychologica, 140(3), 283-288. doi:10.1016/j.actpsy.2012.05.002.

    Abstract

    Being ostracized or excluded, even briefly and by strangers, is painful and threatens fundamental needs. Recent work by Zhong and Leonardelli (2008) found that excluded individuals perceive the room as cooler and that they desire warmer drinks. A perspective that many rely on in embodiment is the theoretical idea that people use metaphorical associations to understand social exclusion (see Landau, Meier, & Keefer, 2010). We suggest that people feel colder because they are colder. The results strongly support the idea that more complex metaphorical understandings of social relations are scaffolded onto literal changes in bodily temperature: Being excluded in an online ball tossing game leads to lower finger temperatures (Study 1), while the negative affect typically experienced after such social exclusion is alleviated after holding a cup of warm tea (Study 2). The authors discuss further implications for the interaction between body and social relations specifically, and for basic and cognitive systems in general.
  • Ikram, M. A., Fornage, M., Smith, A. V., Seshadri, S., Schmidt, R., Debette, S., Vrooman, H. A., Sigurdsson, S., Ropele, S., Taal, H. R., Mook-Kanamori, D. O., Coker, L. H., Longstreth, W. T., Niessen, W. J., DeStefano, A. L., Beiser, A., Zijdenbos, A. P., Struchalin, M., Jack, C. R., Rivadeneira, F., Uitterlinden, A. G., Knopman, D. S., Hartikainen, A.-L., Pennell, C. E., Thiering, E., Steegers, E. A. P., Hakonarson, H., Heinrich, J., Palmer, L. J., Jarvelin, M.-R., McCarthy, M. I., Grant, S. F. A., St Pourcain, B., Timpson, N. J., Smith, G. D., Sovio, U., Nalls, M. A., Au, R., Hofman, A., Gudnason, H., van der Lugt, A., Harris, T. B., Meeks, W. M., Vernooij, M. W., van Buchem, M. A., Catellier, D., Jaddoe, V. W. V., Gudnason, V., Windham, B. G., Wolf, P. A., van Duijn, C. M., Mosley, T. H., Schmidt, H., Launer, L. J., Breteler, M. M. B., DeCarli, C., the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, & Early Growth Genetics (EGG) Consortium (2012). Common variants at 6q22 and 17q21 are associated with intracranial volume. Nature Genetics, 44(5), 539-544. doi:10.1038/ng.2245.

    Abstract

    During aging, intracranial volume remains unchanged and represents maximally attained brain size, while various interacting biological phenomena lead to brain volume loss. Consequently, intracranial volume and brain volume in late life reflect different genetic influences. Our genome-wide association study (GWAS) in 8,175 community-dwelling elderly persons did not reveal any associations at genome-wide significance (P < 5 × 10(-8)) for brain volume. In contrast, intracranial volume was significantly associated with two loci: rs4273712 (P = 3.4 × 10(-11)), a known height-associated locus on chromosome 6q22, and rs9915547 (P = 1.5 × 10(-12)), localized to the inversion on chromosome 17q21. We replicated the associations of these loci with intracranial volume in a separate sample of 1,752 elderly persons (P = 1.1 × 10(-3) for 6q22 and 1.2 × 10(-3) for 17q21). Furthermore, we also found suggestive associations of the 17q21 locus with head circumference in 10,768 children (mean age of 14.5 months). Our data identify two loci associated with head size, with the inversion at 17q21 also likely to be involved in attaining maximal brain size.
  • Indefrey, P. (2006). A meta-analysis of hemodynamic studies on first and second language processing: Which suggested differences can we trust and what do they mean? Language Learning, 56(suppl. 1), 279-304. doi:10.1111/j.1467-9922.2006.00365.x.

    Abstract

    This article presents the results of a meta-analysis of 30 hemodynamic experiments comparing first language (L1) and second language (L2) processing in a range of tasks. The results suggest that reliably stronger activation during L2 processing is found (a) only for task-specific subgroups of L2 speakers and (b) within some, but not all regions that are also typically activated in native language processing. A tentative interpretation based on the functional roles of frontal and temporal regions is suggested.
  • Indefrey, P., & Gullberg, M. (2006). Introduction. Language Learning, 56(suppl. 1), 1-8. doi:10.1111/j.1467-9922.2006.00352.x.

    Abstract

    This volume is a harvest of articles from the first conference in a series on the cognitive neuroscience of language. The first conference focused on the cognitive neuroscience of second language acquisition (henceforth SLA). It brought together experts from fields as diverse as second language acquisition, bilingualism, cognitive neuroscience, and neuroanatomy. The articles and discussion articles presented here illustrate state-of-the-art findings and represent a wide range of theoretical approaches to classic as well as newer SLA issues. The theoretical themes cover age effects in SLA related to the so-called Critical Period Hypothesis and issues of ultimate attainment, and focus on age effects pertaining both to childhood and to aging. Other familiar SLA topics are the effects of proficiency and learning, as well as issues concerning the difference between the end product and the process that yields that product, here discussed in terms of convergence and degeneracy. A topic more related to actual usage of a second language once acquired concerns how multilingual speakers control and regulate their two languages.
  • Indefrey, P. (2006). It is time to work toward explicit processing models for native and second language speakers. Applied Psycholinguistics, 27(1), 66-69. doi:10.1017/S0142716406060103.
  • Ioana, M., Ferwerda, B., Farjadian, S., Ioana, L., Ghaderi, A., Oosting, M., Joosten, L. A., Van der Meer, J. W., Romeo, G., Luiselli, D., Dediu, D., & Netea, M. G. (2012). High variability of TLR4 gene in different ethnic groups of Iran. Innate Immunity, 18, 492-502. doi:10.1177/1753425911423043.

    Abstract

    Infectious diseases exert a constant evolutionary pressure on innate immunity genes. TLR4, an important member of the Toll-like receptor family, specifically recognizes conserved structures of various infectious pathogens. Two functional TLR4 polymorphisms, Asp299Gly and Thr399Ile, modulate innate host defence against infections, and their prevalence across populations has been proposed to be influenced by local infectious pressures. If this assumption is true, strong local infectious pressures would lead to a homogeneous pattern of these ancient TLR4 polymorphisms in geographically close populations, while weak selection or genetic drift may result in a diverse pattern. We evaluated TLR4 polymorphisms in 15 ethnic groups of Iran to assess whether infections exerted selective pressures on different haplotypes containing these variants. The Iranian subpopulations displayed a heterogeneous pattern of TLR4 polymorphisms, comprising various percentages of Asp299Gly and Thr399Ile alone or in combination. The Iranian sample as a whole showed an intermediate mixed pattern when compared with the patterns commonly found in Africa, Europe, Eastern Asia and the Americas. These findings suggest a weak or absent selection pressure on TLR4 polymorphisms in the Middle East, which does not support the assumption of an important role for these polymorphisms in host defence against local pathogens.
  • Ioumpa, K., Graham, S. A., Clausner, T., Fisher, S. E., Van Lier, R., & Van Leeuwen, T. M. (2019). Enhanced self-reported affect and prosocial behaviour without differential physiological responses in mirror-sensory synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190395. doi:10.1098/rstb.2019.0395.

    Abstract

    Mirror-sensory synaesthetes mirror the pain or touch that they observe in other people on their own bodies. This type of synaesthesia has been associated with enhanced empathy. We investigated whether the enhanced empathy of people with mirror-sensory synaesthesia influences the experience of situations involving touch or pain and whether it affects their prosocial decision making. Mirror-sensory synaesthetes (N = 18, all female), verified with a touch-interference paradigm, were compared with a similar number of age-matched control individuals (all female). Participants viewed arousing images depicting pain or touch; we recorded subjective valence and arousal ratings, and physiological responses, hypothesizing more extreme reactions in synaesthetes. The subjective impact of positive and negative images was stronger in synaesthetes than in control participants; the stronger the reported synaesthesia, the more extreme the picture ratings. However, there was no evidence for differential physiological or hormonal responses to arousing pictures. Prosocial decision making was assessed with an economic game assessing altruism, in which participants had to divide money between themselves and a second player. Mirror-sensory synaesthetes donated more money than non-synaesthetes, showing enhanced prosocial behaviour, and also scored higher on the Interpersonal Reactivity Index as a measure of empathy. Our study demonstrates the subjective impact of mirror-sensory synaesthesia and its stimulating influence on prosocial behaviour.

  • Iyer, S., Sam, F. S., DiPrimio, N., Preston, G., Verheijen, J., Murthy, K., Parton, Z., Tsang, H., Lao, J., Morava, E., & Perlstein, E. O. (2019). Repurposing the aldose reductase inhibitor and diabetic neuropathy drug epalrestat for the congenital disorder of glycosylation PMM2-CDG. Disease Models & Mechanisms, 12(11): dmm040584. doi:10.1242/dmm.040584.

    Abstract

    Phosphomannomutase 2 deficiency, or PMM2-CDG, is the most common congenital disorder of glycosylation and affects over 1000 patients globally. There are no approved drugs that treat the symptoms or root cause of PMM2-CDG. To identify clinically actionable compounds that boost human PMM2 enzyme function, we performed a multispecies drug repurposing screen using a novel worm model of PMM2-CDG, followed by PMM2 enzyme functional studies in PMM2-CDG patient fibroblasts. Drug repurposing candidates from this study, and drug repurposing candidates from a previously published study using yeast models of PMM2-CDG, were tested for their effect on human PMM2 enzyme activity in PMM2-CDG fibroblasts. Of the 20 repurposing candidates discovered in the worm-based phenotypic screen, 12 were plant-based polyphenols. Insights from structure-activity relationships revealed epalrestat, the only antidiabetic aldose reductase inhibitor approved for use in humans, as a first-in-class PMM2 enzyme activator. Epalrestat increased PMM2 enzymatic activity in four PMM2-CDG patient fibroblast lines with genotypes R141H/F119L, R141H/E139K, R141H/N216I and R141H/F183S. PMM2 enzyme activity gains ranged from 30% to 400% over baseline, depending on genotype. Pharmacological inhibition of aldose reductase by epalrestat may shunt glucose from the polyol pathway to glucose-1,6-bisphosphate, which is an endogenous stabilizer and coactivator of PMM2 homodimerization. Epalrestat is a safe, oral and brain penetrant drug that was approved 27 years ago in Japan to treat diabetic neuropathy in geriatric populations. We demonstrate that epalrestat is the first small-molecule activator of PMM2 enzyme activity with the potential to treat peripheral neuropathy and correct the underlying enzyme deficiency in a majority of pediatric and adult PMM2-CDG patients.

    Additional information

    DMM040584supp.pdf
  • Jadoul, Y., & Ravignani, A. (2023). Modelling the emergence of synchrony from decentralized rhythmic interactions in animal communication. Proceedings of the Royal Society B: Biological Sciences, 290(2003). doi:10.1098/rspb.2023.0876.

    Abstract

    To communicate, an animal's strategic timing of rhythmic signals is crucial. Evolutionary, game-theoretical, and dynamical systems models can shed light on the interaction between individuals and the associated costs and benefits of signalling at a specific time. Mathematical models that study rhythmic interactions from a strategic or evolutionary perspective are rare in animal communication research. But new inspiration may come from a recent game theory model of how group synchrony emerges from local interactions of oscillatory neurons. In the study, the authors analyse when the benefit of joint synchronization outweighs the cost of individual neurons sending electrical signals to each other. They postulate there is a benefit for pairs of neurons to fire together and a cost for a neuron to communicate. The resulting model delivers a variant of a classical dynamical system, the Kuramoto model. Here, we present an accessible overview of the Kuramoto model and evolutionary game theory, and of the 'oscillatory neurons' model. We interpret the model's results and discuss the advantages and limitations of using this particular model in the context of animal rhythmic communication. Finally, we sketch potential future directions and discuss the need to further combine evolutionary dynamics, game theory and rhythmic processes in animal communication studies.
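    The Kuramoto model discussed in this abstract has a compact mathematical form, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i). As an illustration only (a minimal numerical sketch with arbitrary parameter values, not the game-theoretic variant analysed in the paper), a basic simulation showing coupling-driven synchrony might look like this:

```python
import numpy as np

def simulate_kuramoto(n=50, coupling=2.0, t_steps=2000, dt=0.01, seed=0):
    """Euler-integrate the classic Kuramoto model:
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, n)       # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    for _ in range(t_steps):
        # vectorized pairwise coupling term for each oscillator i
        coupling_term = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (coupling / n) * coupling_term)
    # order parameter r in [0, 1]: 0 = incoherent, 1 = fully synchronized
    return np.abs(np.exp(1j * theta).mean())

# Coupling well above the critical value should yield much higher
# phase coherence than weak coupling.
r_strong = simulate_kuramoto(coupling=4.0)
r_weak = simulate_kuramoto(coupling=0.1)
```

    With strong coupling the order parameter approaches 1 (group synchrony emerges from purely local interactions); with weak coupling the phases stay incoherent, mirroring the cost–benefit trade-off the abstract describes.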
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). PyGellermann: a Python tool to generate pseudorandom series for human and non-human animal behavioural experiments. BMC Research Notes, 16: 135. doi:10.1186/s13104-023-06396-x.

    Abstract

    Objective

    Researchers in animal cognition, psychophysics, and experimental psychology need to randomise the presentation order of trials in experimental sessions. In many paradigms, for each trial, one of two responses can be correct, and the trials need to be ordered such that the participant’s responses are a fair assessment of their performance. Specifically, in some cases, especially for low numbers of trials, randomised trial orders need to be excluded if they contain simple patterns which a participant could accidentally match and so succeed at the task without learning.
    Results

    We present and distribute a simple Python software package and tool to produce pseudorandom sequences following the Gellermann series. This series has been proposed to pre-empt simple heuristics and avoid inflated performance rates via false positive responses. Our tool allows users to choose the sequence length and outputs a .csv file with newly and randomly generated sequences. This allows behavioural researchers to produce, in a few seconds, a pseudorandom sequence for their specific experiment. PyGellermann is available at https://github.com/YannickJadoul/PyGellermann.
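    Gellermann-style series constrain binary trial sequences so that neither perseveration nor simple alternation yields above-chance performance. As an illustration only — this is a simplified sketch, not PyGellermann's actual implementation — two of the classic constraints (balanced counts of the two alternatives and no more than three identical items in a row) can be enforced by rejection sampling:

```python
import random

def is_valid(seq, max_run=3):
    """Simplified Gellermann-style checks: balanced counts, limited runs."""
    if seq.count("A") != seq.count("B"):
        return False
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return False
    return True

def generate_sequence(length=10, seed=None):
    """Rejection-sample a pseudorandom A/B sequence passing the checks."""
    rng = random.Random(seed)
    while True:
        seq = [rng.choice("AB") for _ in range(length)]
        if is_valid(seq):
            return "".join(seq)
```

    PyGellermann itself applies the full set of Gellermann's criteria, which include further constraints beyond these two, so this sketch is deliberately incomplete.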
  • Jaeger, E., Leedham, S., Lewis, A., Segditsas, S., Becker, M., Rodenas-Cuadrado, P., Davis, H., Kaur, K., Heinimann, K., Howarth, K., East, J., Taylor, J., Thomas, H., & Tomlinson, I. (2012). Hereditary mixed polyposis syndrome is caused by a 40-kb upstream duplication that leads to increased and ectopic expression of the BMP antagonist GREM1. Nature Genetics, 44, 699-703. doi:10.1038/ng.2263.

    Abstract

    Hereditary mixed polyposis syndrome (HMPS) is characterized by apparent autosomal dominant inheritance of multiple types of colorectal polyp, with colorectal carcinoma occurring in a high proportion of affected individuals. Here, we use genetic mapping, copy-number analysis, exclusion of mutations by high-throughput sequencing, gene expression analysis and functional assays to show that HMPS is caused by a duplication spanning the 3' end of the SCG5 gene and a region upstream of the GREM1 locus. This unusual mutation is associated with increased allele-specific GREM1 expression. Whereas GREM1 is expressed in intestinal subepithelial myofibroblasts in controls, GREM1 is predominantly expressed in the epithelium of the large bowel in individuals with HMPS. The HMPS duplication contains predicted enhancer elements; some of these interact with the GREM1 promoter and can drive gene expression in vitro. Increased GREM1 expression is predicted to cause reduced bone morphogenetic protein (BMP) pathway activity, a mechanism that also underlies tumorigenesis in juvenile polyposis of the large bowel.
  • Jago, L. S., Alcock, K., Meints, K., Pine, J. M., & Rowland, C. F. (2023). Language outcomes from the UK-CDI Project: Can risk factors, vocabulary skills and gesture scores in infancy predict later language disorders or concern for language development? Frontiers in Psychology, 14: 1167810. doi:10.3389/fpsyg.2023.1167810.

    Abstract

    At the group level, children exposed to certain health and demographic risk factors, and who have delayed language in early childhood, are more likely to have language problems later in childhood. However, it is unclear whether we can use these risk factors to predict whether an individual child is likely to develop problems with language (e.g., be diagnosed with a developmental language disorder). We tested this in a sample of 146 children who took part in the UK-CDI norming project. When the children were 15–18 months old, 1,210 British parents completed: (a) the UK-CDI (a detailed assessment of vocabulary and gesture use) and (b) the Family Questionnaire (questions about health and demographic risk factors). When the children were between 4 and 6 years, 146 of the same parents completed a short questionnaire that assessed (a) whether children had been diagnosed with a disability that was likely to affect language proficiency (e.g., developmental disability, language disorder, hearing impairment), but (b) also yielded a broader measure: whether the child’s language had raised any concern, either by a parent or professional. Discriminant function analyses were used to assess whether we could use different combinations of 10 risk factors, together with early vocabulary and gesture scores, to identify children (a) who had developed a language-related disability by the age of 4–6 years (20 children, 13.70% of the sample) or (b) for whom concern about language had been expressed (49 children; 33.56%). The overall accuracy of the models, and the specificity scores, were high, indicating that the measures correctly identified those children without a language-related disability and whose language was not of concern. However, sensitivity scores were low, indicating that the models could not identify those children who were diagnosed with a language-related disability or whose language was of concern. Several exploratory analyses were carried out to analyse these results further. Overall, the results suggest that it is difficult to use parent reports of early risk factors and language in the first 2 years of life to predict which children are likely to be diagnosed with a language-related disability. Possible reasons for this are discussed.

    Additional information

    follow up questionnaire table S1
  • Janse, E. (2006). Auditieve woordherkenning bij afasie: Waarneming van mismatch items [Auditory word recognition in aphasia: Perception of mismatch items]. Afasiologie, 28(4), 64-67.
  • Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.

    Abstract

    In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to which extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
  • Janse, I., Bok, J., Hamidjaja, R. A., Hodemaekers, H. M., & van Rotterdam, B. J. (2012). Development and comparison of two assay formats for parallel detection of four biothreat pathogens by using suspension microarrays. PLoS One, 7(2), e31958. doi:10.1371/journal.pone.0031958.

    Abstract

    Microarrays provide a powerful analytical tool for the simultaneous detection of multiple pathogens. We developed diagnostic suspension microarrays for sensitive and specific detection of the biothreat pathogens Bacillus anthracis, Yersinia pestis, Francisella tularensis and Coxiella burnetii. Two assay chemistries for amplification and labeling were developed, one method using direct hybridization and the other using target-specific primer extension, combined with hybridization to universal arrays. Asymmetric PCR products for both assay chemistries were produced by using a multiplex asymmetric PCR amplifying 16 DNA signatures (16-plex). The performances of both assay chemistries were compared and their advantages and disadvantages are discussed. The developed microarrays detected multiple signature sequences and an internal control which made it possible to confidently identify the targeted pathogens and assess their virulence potential. The microarrays were highly specific and detected various strains of the targeted pathogens. Detection limits for the different pathogen signatures were similar or slightly higher compared to real-time PCR. Probit analysis showed that even a few genomic copies could be detected with 95% confidence. The microarrays detected DNA from different pathogens mixed in different ratios and from spiked or naturally contaminated samples. The assays that were developed have a potential for application in surveillance and diagnostics.
  • Janse, E. (2006). Lexical competition effects in aphasia: Deactivation of lexical candidates in spoken word processing. Brain and Language, 97, 1-11. doi:10.1016/j.bandl.2005.06.011.

    Abstract

    Research has shown that Broca’s and Wernicke’s aphasic patients show different impairments in auditory lexical processing. The results of an experiment with form-overlapping primes showed an inhibitory effect of form-overlap for control adults and a weak inhibition trend for Broca’s aphasic patients, but a facilitatory effect of form-overlap was found for Wernicke’s aphasic participants. This suggests that Wernicke’s aphasic patients are mainly impaired in suppression of once-activated word candidates and selection of one winning candidate, which may be related to their problems in auditory language comprehension.
  • Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.

    Abstract

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning.
  • Janssen, C., Segers, E., McQueen, J. M., & Verhoeven, L. (2019). Comparing effects of instruction on word meaning and word form on early literacy abilities in kindergarten. Early Education and Development, 30(3), 375-399. doi:10.1080/10409289.2018.1547563.

    Abstract

    Research Findings: The present study compared effects of explicit instruction on and practice with the phonological form of words (form-focused instruction) versus explicit instruction on and practice with the meaning of words (meaning-focused instruction). Instruction was given via interactive storybook reading in the kindergarten classroom of children learning Dutch. We asked whether the 2 types of instruction had different effects on vocabulary development and 2 precursors of reading ability—phonological awareness and letter knowledge—and we examined effects on these measures of the ability to learn new words with minimal acoustic-phonetic differences. Learners showed similar receptive target-word vocabulary gain after both types of instruction, but learners who received form-focused vocabulary instruction showed more gain in semantic knowledge of target vocabulary, phonological awareness, and letter knowledge than learners who received meaning-focused vocabulary instruction. Level of ability to learn pairs of words with minimal acoustic-phonetic differences predicted gain in semantic knowledge of target vocabulary and in letter knowledge in the form-focused instruction group only. Practice or Policy: A focus on the form of words during instruction appears to have benefits for young children learning vocabulary.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2019). The effects of larynx height on vowel production are mitigated by the active control of articulators. Journal of Phonetics, 74, 1-17. doi:10.1016/j.wocn.2019.02.002.

    Abstract

    The influence of larynx position on vowel articulation is an important topic in understanding speech production, the present-day distribution of linguistic diversity and the evolution of speech and language in our lineage. We introduce here a realistic computer model of the vocal tract, constructed from actual human MRI data, which can learn, using machine learning techniques, to control the articulators in such a way as to produce speech sounds matching as closely as possible to a given set of target vowels. We systematically control the vertical position of the larynx and we quantify the differences between the target and produced vowels for each such position across multiple replications. We report that, indeed, larynx height does affect the accuracy of reproducing the target vowels and the distinctness of the produced vowel system, that there is a “sweet spot” of larynx positions that are optimal for vowel production, but that nevertheless, even extreme larynx positions do not result in a collapsed or heavily distorted vowel space that would make speech unintelligible. Together with other lines of evidence, our results support the view that the vowel space of human languages is influenced by our larynx position, but that other positions of the larynx may also be fully compatible with speech.

    Additional information

    Research Data via Github
  • Janzen, G. (2006). Memory for object location and route direction in virtual large-scale space. Quarterly Journal of Experimental Psychology, 59(3), 493-508. doi:10.1080/02724980443000746.

    Abstract

    In everyday life people have to deal with tasks such as finding a novel path to a certain goal location, finding one’s way back, finding a short cut, or making a detour. In all of these tasks people acquire route knowledge. For finding the same way back they have to remember locations of objects like buildings and additionally direction changes. In three experiments using recognition tasks as well as conscious and unconscious spatial priming paradigms, memory processes underlying wayfinding behaviour were investigated. Participants learned a route through a virtual environment with objects either placed at intersections (i.e., decision points) where another route could be chosen or placed along the route (non-decision points). Analyses indicate first that objects placed at decision points are recognized faster than other objects. Second, they indicate that the direction in which a route is travelled is represented only at locations that are relevant for wayfinding (e.g., decision points). The results point out the efficient way in which memory for object location and memory for route direction interact.
  • Janzen, G., Haun, D. B. M., & Levinson, S. C. (2012). Tracking down abstract linguistic meaning: Neural correlates of spatial frame of reference ambiguities in language. PLoS One, 7(2), e30657. doi:10.1371/journal.pone.0030657.

    Abstract

    This functional magnetic resonance imaging (fMRI) study investigates a crucial parameter in spatial description, namely variants in the frame of reference chosen. Two frames of reference are available in European languages for the description of small-scale assemblages, namely the intrinsic (or object-oriented) frame and the relative (or egocentric) frame. We showed participants a sentence such as “the ball is in front of the man”, ambiguous between the two frames, and then a picture of a scene with a ball and a man – participants had to respond by indicating whether the picture did or did not match the sentence. There were two blocks, in which we induced each frame of reference by feedback. Thus for the crucial test items, participants saw exactly the same sentence and the same picture but now from one perspective, now the other. Using this method, we were able to precisely pinpoint the pattern of neural activation associated with each linguistic interpretation of the ambiguity, while holding the perceptual stimuli constant. Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference. The study is among the few to show a distinctive pattern of neural activation for an abstract yet specific semantic parameter in language. It shows with special clarity the nature of the neural substrate supporting each frame of spatial reference.
  • Jasmin, K., & Casasanto, D. (2012). The QWERTY Effect: How typing shapes the meanings of words. Psychonomic Bulletin & Review, 19, 499-504. doi:10.3758/s13423-012-0229-7.

    Abstract

    The QWERTY keyboard mediates communication for millions of language users. Here, we investigated whether differences in the way words are typed correspond to differences in their meanings. Some words are spelled with more letters on the right side of the keyboard and others with more letters on the left. In three experiments, we tested whether asymmetries in the way people interact with keys on the right and left of the keyboard influence their evaluations of the emotional valence of the words. We found the predicted relationship between emotional valence and QWERTY key position across three languages (English, Spanish, and Dutch). Words with more right-side letters were rated as more positive in valence, on average, than words with more left-side letters: the QWERTY effect. This effect was strongest in new words coined after QWERTY was invented and was also found in pseudowords. Although these data are correlational, the discovery of a similar pattern across languages, which was strongest in neologisms, suggests that the QWERTY keyboard is shaping the meanings of words as people filter language through their fingers. Widespread typing introduces a new mechanism by which semantic changes in language can arise.
  • Jepma, M., Verdonschot, R. G., Van Steenbergen, H., Rombouts, S. A. R. B., & Nieuwenhuis, S. (2012). Neural mechanisms underlying the induction and relief of perceptual curiosity. Frontiers in Behavioral Neuroscience, 6: 5. doi:10.3389/fnbeh.2012.00005.

    Abstract

    Curiosity is one of the most basic biological drives in both animals and humans, and has been identified as a key motive for learning and discovery. Despite the importance of curiosity and related behaviors, the topic has been largely neglected in human neuroscience; hence little is known about the neurobiological mechanisms underlying curiosity. We used functional magnetic resonance imaging (fMRI) to investigate what happens in our brain during the induction and subsequent relief of perceptual curiosity. Our core findings were that (1) the induction of perceptual curiosity, through the presentation of ambiguous visual input, activated the anterior insula and anterior cingulate cortex (ACC), brain regions sensitive to conflict and arousal; (2) the relief of perceptual curiosity, through visual disambiguation, activated regions of the striatum that have been related to reward processing; and (3) the relief of perceptual curiosity was associated with hippocampal activation and enhanced incidental memory. These findings provide the first demonstration of the neural basis of human perceptual curiosity. Our results provide neurobiological support for a classic psychological theory of curiosity, which holds that curiosity is an aversive condition of increased arousal whose termination is rewarding and facilitates memory.
  • Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.

    Abstract

    Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single-competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation.
  • Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.

    Abstract

    Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
  • Jiang, J., Dai, B., Peng, D., Zhu, C., Liu, L., & Lu, C. (2012). Neural synchronization during face-to-face communication. Journal of Neuroscience, 32(45), 16064-16069. doi:10.1523/JNEUROSCI.2926-12.2012.

    Abstract

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
  • Jin, H., Wang, Q., Yang, Y.-F., Zhang, H., Gao, M., Jin, S., Chen, Y., Xu, T., Zheng, Y.-R., Chen, J., Xiao, Q., Yang, J., Wang, X., Geng, H., Ge, J., Wang, W.-W., Chen, X., Zhang, L., Zuo, X.-N., & Chuan-Peng, H. (2023). The Chinese Open Science Network (COSN): Building an open science community from scratch. Advances in Methods and Practices in Psychological Science, 6(1). doi:10.1177/25152459221144986.

    Abstract

    Open Science is becoming a mainstream scientific ideology in psychology and related fields. However, researchers, especially early-career researchers (ECRs) in developing countries, are facing significant hurdles in engaging in Open Science and moving it forward. In China, various societal and cultural factors discourage ECRs from participating in Open Science, such as the lack of dedicated communication channels and the norm of modesty. To make the voice of Open Science heard by Chinese-speaking ECRs and scholars at large, the Chinese Open Science Network (COSN) was initiated in 2016. With grassroots orientation, diversity, and inclusivity as its core values, COSN has grown from a small Open Science interest group to a recognized network both in the Chinese-speaking research community and the international Open Science community. So far, COSN has organized three in-person workshops, 12 tutorials, 48 talks, and 55 journal club sessions and translated 15 Open Science-related articles and blogs from English to Chinese. Currently, the main social media account of COSN (i.e., the WeChat Official Account) has more than 23,000 subscribers, and more than 1,000 researchers/students actively participate in the discussions on Open Science. In this article, we share our experience in building such a network to encourage ECRs in developing countries to start their own Open Science initiatives and engage in the global Open Science movement. We foresee great collaborative efforts of COSN together with all other local and international networks to further accelerate the Open Science movement.
  • Jodzio, A., Piai, V., Verhagen, L., Cameron, I., & Indefrey, P. (2023). Validity of chronometric TMS for probing the time-course of word production: A modified replication. Cerebral Cortex, 33(12), 7816-7829. doi:10.1093/cercor/bhad081.

    Abstract

    In the present study, we used chronometric TMS to probe the time-course of 3 brain regions during a picture naming task. The left inferior frontal gyrus, left posterior middle temporal gyrus, and left posterior superior temporal gyrus were all separately stimulated in 1 of 5 time-windows (225, 300, 375, 450, and 525 ms) from picture onset. We found posterior temporal areas to be causally involved in picture naming in earlier time-windows, whereas all 3 regions appear to be involved in the later time-windows. However, chronometric TMS produces nonspecific effects that may impact behavior, and furthermore, the time-course of any given process is a product of both the involved processing stages along with individual variation in the duration of each stage. We therefore extend previous work in the field by accounting for both individual variations in naming latencies and directly testing for nonspecific effects of TMS. Our findings reveal that both factors influence behavioral outcomes at the group level, underlining the importance of accounting for individual variations in naming latencies, especially for late processing stages closer to articulation, and recognizing the presence of nonspecific effects of TMS. The paper advances key considerations and avenues for future work using chronometric TMS to study overt production.
  • Jones, S., Nyberg, L., Sandblom, J., Stigsdotter Neely, A., Ingvar, M., Petersson, K. M., & Bäckman, L. (2006). Cognitive and neural plasticity in aging: General and task-specific limitations. Neuroscience and Biobehavioral Reviews, 30(6), 864-871. doi:10.1016/j.neubiorev.2006.06.012.

    Abstract

    There is evidence for cognitive as well as neural plasticity across the adult life span, although aging is associated with certain constraints on plasticity. In the current paper, we argue that the age-related reduction in cognitive plasticity may be due to (a) deficits in general processing resources, and (b) failure to engage in task-relevant cognitive operations. Memory-training research suggests that age-related processing deficits (e.g., executive functions, speed) hinder older adults from utilizing mnemonic techniques as efficiently as the young, and that this age difference is reflected by diminished frontal activity during mnemonic use. Additional constraints on memory plasticity in old age are related to difficulties that are specific to the task, such as creating visual images, as well as in binding together the information to be remembered. These deficiencies are paralleled by reduced activity in occipito-parietal and medial–temporal regions, respectively. Future attempts to optimize intervention-related gains in old age should consider targeting both general processing and task-specific origins of age-associated reductions in cognitive plasticity.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (2023). Introduction special issue: Marking the truth: A cross-linguistic approach to verum. Zeitschrift für Sprachwissenschaft, 42(3), 429-442. doi:10.1515/zfs-2023-2012.

    Abstract

    This special issue focuses on the theoretical and empirical underpinnings of truth-marking. The names that have been used to refer to this phenomenon include, among others, counter-assertive focus, polar(ity) focus, verum focus, emphatic polarity or simply verum. This terminological variety is suggestive of the wide range of ideas and conceptions that characterizes this research field. This collection aims to get closer to the core of what truly constitutes verum. We want to expand the empirical base and determine the common and diverging properties of truth-marking in the languages of the world. The objective is to set a theoretical and empirical baseline for future research on verum and related phenomena.
  • Junge, C., Cutler, A., & Hagoort, P. (2012). Electrophysiological evidence of early word learning. Neuropsychologia, 50, 3702-3712. doi:10.1016/j.neuropsychologia.2012.10.012.

    Abstract

    Around their first birthday infants begin to talk, yet they comprehend words long before. This study investigated the event-related potential (ERP) responses of nine-month-olds to basic-level picture-word pairings. After a familiarization phase of six picture-word pairings per semantic category, comprehension for novel exemplars was tested in a picture-word matching paradigm. ERPs time-locked to pictures elicited a modulation of the Negative Central (Nc) component, associated with visual attention and recognition. It was attenuated by category repetition as well as by the type-token ratio of the picture context. ERPs time-locked to words in the training phase became more negative with repetition (N300-600), but there was no influence of picture type-token ratio, suggesting that infants had identified the concept of each picture before a word was presented. Results from the test phase provided clear support that infants integrated word meanings with (novel) picture context. Here, infants showed different ERP responses for words that did or did not align with the picture context: a phonological mismatch (N200) and a semantic mismatch (N400). Together, the results are informative of visual categorization, word recognition and word-to-world mappings, all three crucial processes for vocabulary construction.
  • Junge, C., Kooijman, V., Hagoort, P., & Cutler, A. (2012). Rapid recognition at 10 months as a predictor of language development. Developmental Science, 15, 463-473. doi:10.1111/j.1467-7687.2012.1144.x.

    Abstract

    Infants’ ability to recognize words in continuous speech is vital for building a vocabulary. We here examined the amount and type of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in isolation, then recognition was assessed by comparing event-related potentials to this word versus a word that they had not heard directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed such responses to words they had first heard within an utterance. Those that did succeed in the latter, harder, task, however, understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at 24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize the words in continuous utterances is clearly linked to future language development.
  • Kakimoto, N., Shimamoto, H., Kitisubkanchana, J., Tsujimoto, T., Senda, Y., Iwamoto, Y., Verdonschot, R. G., Hasegawa, Y., & Murakami, S. (2019). T2 relaxation times of the retrodiscal tissue in patients with temporomandibular joint disorders and in healthy volunteers: A comparative study. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 128(3), 311-318. doi:10.1016/j.oooo.2019.02.005.

    Abstract

    Objective. The aims of this study were to compare the temporomandibular joint (TMJ) retrodiscal tissue T2 relaxation times between patients with temporomandibular disorders (TMDs) and asymptomatic volunteers and to assess the diagnostic potential of this approach.
    Study Design. Patients with TMD (n = 173) and asymptomatic volunteers (n = 17) were examined by using a 1.5-T magnetic resonance scanner. The imaging protocol consisted of oblique sagittal, T2-weighted, 8-echo fast spin echo sequences in the closed mouth position. Retrodiscal tissue T2 relaxation times were obtained. Additionally, disc location and reduction, disc configuration, joint effusion, osteoarthritis, and bone edema or osteonecrosis were classified using MRI scans. The T2 relaxation times of each group were statistically compared.
    Results. Retrodiscal tissue T2 relaxation times were significantly longer in patient groups than in asymptomatic volunteers (P < .01). T2 relaxation times were significantly longer in all of the morphologic categories. The most important variables affecting retrodiscal tissue T2 relaxation times were disc configuration, joint effusion, and osteoarthritis.
    Conclusion. Retrodiscal tissue T2 relaxation times of patients with TMD were significantly longer than those of healthy volunteers. This finding may lead to the development of a diagnostic marker to aid in the early detection of TMDs.
  • Kałamała, P., Chuderski, A., Szewczyk, J., Senderecka, M., & Wodniecka, Z. (2023). Bilingualism caught in a net: A new approach to understanding the complexity of bilingual experience. Journal of Experimental Psychology: General, 152(1), 157-174. doi:10.1037/xge0001263.

    Abstract

    The growing importance of research on bilingualism in psychology and neuroscience motivates the need for a psychometric model that can be used to understand and quantify this phenomenon. This research is the first to meet this need. We reanalyzed two data sets (N = 171 and N = 112) from relatively young adult language-unbalanced bilinguals and asked whether bilingualism is best described by the factor structure or by the network structure. The factor and network models were established on one data set and then validated on the other data set in a fully confirmatory manner. The network model provided the best fit to the data. This implies that bilingualism should be conceptualized as an emergent phenomenon arising from direct and idiosyncratic dependencies among the history of language acquisition, diverse language skills, and language-use practices. These dependencies can be reduced to neither a single universal quotient nor to some more general factors. Additional in-depth network analyses showed that the subjective perception of proficiency along with language entropy and language mixing were the most central indices of bilingualism, thus indicating that these measures can be especially sensitive to variation in the overall bilingual experience. Overall, this work highlights the great potential of psychometric network modeling to gain a more accurate description and understanding of complex (psycho)linguistic and cognitive phenomena.
  • Kamermans, K. L., Pouw, W., Mast, F. W., & Paas, F. (2019). Reinterpretation in visual imagery is possible without visual cues: A validation of previous research. Psychological Research, 83(6), 1237-1250. doi:10.1007/s00426-017-0956-5.

    Abstract

    Is visual reinterpretation of bistable figures (e.g., the duck/rabbit figure) in visual imagery possible? Current consensus suggests that it is in principle possible because of converging evidence of quasi-pictorial functioning of visual imagery. Yet, studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with the aid of visual cues. Hence, reinterpretation was not performed with mental imagery alone. Therefore, in this study we assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180 degrees), two of which were visually inspected and one was explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 participants (out of 36; 30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not inflate reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.
  • Kamermans, K. L., Pouw, W., Fassi, L., Aslanidou, A., Paas, F., & Hostetter, A. B. (2019). The role of gesture as simulated action in reinterpretation of mental imagery. Acta Psychologica, 197, 131-142. doi:10.1016/j.actpsy.2019.05.004.

    Abstract

    In two experiments, we examined the role of gesture in reinterpreting a mental image. In Experiment 1, we found that participants gestured more about a figure they had learned through manual exploration than about a figure they had learned through vision. This supports claims that gestures emerge from the activation of perception-relevant actions during mental imagery. In Experiment 2, we investigated whether such gestures have a causal role in affecting the quality of mental imagery. Participants were randomly assigned to gesture, not gesture, or engage in a manual interference task as they attempted to reinterpret a figure they had learned through manual exploration. We found that manual interference significantly impaired participants' success on the task. Taken together, these results suggest that gestures reflect mental imaginings of interactions with a mental image and that these imaginings are critically important for mental manipulation and reinterpretation of that image. However, our results suggest that enacting the imagined movements in gesture is not critically important on this particular task.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. Nevertheless, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Kaspi, A., Hildebrand, M. S., Jackson, V. E., Braden, R., Van Reyk, O., Howell, T., Debono, S., Lauretta, M., Morison, L., Coleman, M. J., Webster, R., Coman, D., Goel, H., Wallis, M., Dabscheck, G., Downie, L., Baker, E. K., Parry-Fielder, B., Ballard, K., Harrold, E., Ziegenfusz, S., Bennett, M. F., Robertson, E., Wang, L., Boys, A., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2023). Genetic aetiologies for childhood speech disorder: Novel pathways co-expressed during brain development. Molecular Psychiatry, 28, 1647-1663. doi:10.1038/s41380-022-01764-8.

    Abstract

    Childhood apraxia of speech (CAS), the prototypic severe childhood speech disorder, is characterized by motor programming and planning deficits. Genetic factors make substantive contributions to CAS aetiology, with a monogenic pathogenic variant identified in a third of cases, implicating around 20 single genes to date. Here we aimed to identify molecular causation in 70 unrelated probands ascertained with CAS. We performed trio genome sequencing. Our bioinformatic analysis examined single nucleotide, indel, copy number, structural and short tandem repeat variants. We prioritised appropriate variants arising de novo or inherited that were expected to be damaging based on in silico predictions. We identified high confidence variants in 18/70 (26%) probands, almost doubling the current number of candidate genes for CAS. Three of the 18 variants affected SETBP1, SETD1A and DDX3X, thus confirming their roles in CAS, while the remaining 15 occurred in genes not previously associated with this disorder. Fifteen variants arose de novo and three were inherited. We provide further novel insights into the biology of child speech disorder, highlighting the roles of chromatin organization and gene regulation in CAS, and confirm that genes involved in CAS are co-expressed during brain development. Our findings confirm a diagnostic yield comparable to, or even higher than, that of other neurodevelopmental disorders with substantial de novo variant burden. Data also support the increasingly recognised overlaps between genes conferring risk for a range of neurodevelopmental disorders. Understanding the aetiological basis of CAS is critical to end the diagnostic odyssey and ensure affected individuals are poised for precision medicine trials.
  • Kaufhold, S. P., & Van Leeuwen, E. J. C. (2019). Why intergroup variation matters for understanding behaviour. Biology Letters, 15(11): 20190695. doi:10.1098/rsbl.2019.0695.

    Abstract

    Intergroup variation (IGV) refers to variation between different groups of the same species. While its existence in the behavioural realm has been expected and evidenced, the potential effects of IGV are rarely considered in studies that aim to shed light on the evolutionary origins of human socio-cognition, especially in our closest living relatives—the great apes. Here, by taking chimpanzees as a point of reference, we argue that (i) IGV could plausibly explain inconsistent research findings across numerous topics of inquiry (experimental/behavioural studies on chimpanzees), (ii) understanding the evolutionary origins of behaviour requires an accurate assessment of species' modes of behaving across different socio-ecological contexts, which necessitates a reliable estimation of variation across intraspecific groups, and (iii) IGV in the behavioural realm is increasingly likely to be expected owing to the progressive identification of non-human animal cultures. With these points, and by extrapolating from chimpanzees to generic guidelines, we aim to encourage researchers to explicitly consider IGV as an explanatory variable in future studies attempting to understand the socio-cognitive and evolutionary determinants of behaviour in group-living animals.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand [Abstract]. Abstracts of the Acoustics 2012 Hong Kong conference published in The Journal of the Acoustical Society of America, 131, 3311. doi:10.1121/1.4708385.

    Abstract

    Hand gestures combine with speech to form a single integrated system of meaning during language comprehension (Kelly et al., 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. Thirty-one participants watched videos presenting speech with gestures or manual actions on objects. The relationship between the speech and gesture/action was either complementary (e.g., “He found the answer,” while producing a calculating gesture vs. actually using a calculator) or incongruent (e.g., the same sentence paired with the incongruent gesture/action of stirring with a spoon). Participants watched the video (prime) and then responded to a written word (target) that was or was not spoken in the video prime (e.g., “found” or “cut”). ERPs were taken to the primes (time-locked to the spoken verb, e.g., “found”) and the written targets. For primes, there was a larger frontal N400 (semantic processing) to incongruent vs. congruent items for the gesture, but not action, condition. For targets, the P2 (phonemic processing) was smaller for target words following congruent vs. incongruent gesture, but not action, primes. These findings suggest that hand gestures are integrated with speech in a privileged fashion compared to manual actions on objects.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G., & Harbusch, K. (2019). Mutual attraction between high-frequency verbs and clause types with finite verbs in early positions: Corpus evidence from spoken English, Dutch, and German. Language, Cognition and Neuroscience, 34(9), 1140-1151. doi:10.1080/23273798.2019.1642498.

    Abstract

    We report a hitherto unknown statistical relationship between the corpus frequency of finite verbs and their fixed linear positions (early vs. late) in finite clauses of English, Dutch, and German. Compared to the overall frequency distribution of verb lemmas in the corpora, high-frequency finite verbs are overused in main clauses, at the expense of nonfinite verbs. This finite versus nonfinite split of high-frequency verbs is basically absent from subordinate clauses. Furthermore, this “main-clause bias” (MCB) of high-frequency verbs is more prominent in German and Dutch (SOV languages) than in English (an SVO language). We attribute the MCB and its varying effect sizes to faster accessibility of high-frequency finite verbs, which (1) increases the probability for these verbs to land in clauses mandating early verb placement, and (2) boosts the activation of clause plans that assign verbs to early linear positions (in casu: clauses with SVO as opposed to SOV order).

    Additional information

    plcp_a_1642498_sm1530.pdf
  • Kempen, G., Olsthoorn, N., & Sprenger, S. (2012). Grammatical workspace sharing during language production and language comprehension: Evidence from grammatical multitasking. Language and Cognitive Processes, 27, 345-380. doi:10.1080/01690965.2010.544583.

    Abstract

    Grammatical encoding and grammatical decoding (in sentence production and comprehension, respectively) are often portrayed as independent modalities of grammatical performance that only share declarative resources: lexicon and grammar. The processing resources subserving these modalities are supposed to be distinct. In particular, one assumes the existence of two workspaces where grammatical structures are assembled and temporarily maintained—one for each modality. An alternative theory holds that the two modalities share many of their processing resources and postulates a single mechanism for the online assemblage and short-term storage of grammatical structures: a shared workspace. We report two experiments with a novel “grammatical multitasking” paradigm: the participants had to read (i.e., decode) and to paraphrase (encode) sentences presented in fragments, responding to each input fragment as fast as possible with a fragment of the paraphrase. The main finding was that grammatical constraints with respect to upcoming input that emanate from decoded sentence fragments are immediately replaced by grammatical expectations emanating from the structure of the corresponding paraphrase fragments. This evidences that the two modalities have direct access to, and operate upon, the same (i.e., token-identical) grammatical structures. This is possible only if the grammatical encoding and decoding processes command the same, shared grammatical workspace. Theoretical implications for important forms of grammatical multitasking—self-monitoring, turn-taking in dialogue, speech shadowing, and simultaneous translation—are explored.
  • Kempen, G. (1976). Syntactic constructions as retrieval plans. British Journal of Psychology, 67(2), 149-160. doi:10.1111/j.2044-8295.1976.tb01505.x.

    Abstract

    Four probe latency experiments show that the ‘constituent boundary effect’ (transitions between constituents are more difficult than within constituents) is a retrieval and not a storage phenomenon. The experimental logic used is called paraphrastic reproduction: after verbatim memorization of some sentences, subjects were instructed to reproduce them both in their original wording and in the form of sentences that, whilst preserving the original meaning, embodied different syntactic constructions. Syntactic constructions are defined as pairs which consist of a pattern of conceptual information and a syntactic scheme, i.e. a sequence of syntactic word categories and function words. For example, the sequence noun + finite intransitive main verb (‘John runs’) expresses a conceptual actor-action relationship. It is proposed that for each overlearned and simple syntactic construction there exists a retrieval plan which does the following. It searches through the long-term memory information that has been designated as the conceptual content of the utterance(s) to be produced, looking for a token of its conceptual pattern. The retrieved information is then cast into the format of its syntactic scheme. The organization of such plans is held responsible for the constituent boundary effect.
  • Kempen, G. (1984). Taaltechnologie voor het Nederlands: Vorderingen bij de bouw van een Nederlandstalig dialoog- en auteursysteem. Toegepaste Taalwetenschap in Artikelen, 19, 48-58.
  • Kempen, G., Konst, L., & De Smedt, K. (1984). Taaltechnologie voor het Nederlands: Vorderingen bij de bouw van een Nederlandstalig dialoog- en auteursysteem. Informatie, 26, 878-881.
  • Kempen, G. (1988). Preface. Acta Psychologica, 69(3), 205-206. doi:10.1016/0001-6918(88)90032-7.
  • Kendrick, K. H., Holler, J., & Levinson, S. C. (2023). Turn-taking in human face-to-face interaction is multimodal: Gaze direction and manual gestures aid the coordination of turn transitions. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210473. doi:10.1098/rstb.2021.0473.

    Abstract

    Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature.

    Additional information

    supplemental material
  • Kholodova, A., Peter, M., Rowland, C. F., Jacob, G., & Allen, S. E. M. (2023). Abstract priming and the lexical boost effect across development in a structurally biased language. Languages, 8: 264. doi:10.3390/languages8040264.

    Abstract

    The present study investigates the developmental trajectory of abstract representations for syntactic structures in children. In a structural priming experiment on the dative alternation in German, we primed children from three different age groups (3–4 years, 5–6 years, 7–8 years) and adults with double object datives (Dora sent Boots the rabbit) or prepositional object datives (Dora sent the rabbit to Boots). Importantly, the prepositional object structure in German is dispreferred and only rarely encountered by young children. While immediate as well as cumulative structural priming effects occurred across all age groups, these effects were strongest in the 3- to 4-year-old group and gradually decreased with increasing age. These results suggest that representations in young children are less stable than in adults and, therefore, more susceptible to adaptation both immediately and across time, presumably due to stronger surprisal. Lexical boost effects, in contrast, were not present in 3- to 4-year-olds but gradually emerged with increasing age, possibly due to limited working-memory capacity in the younger child groups.
  • Wu, Q., Kidd, E., & Goodhew, S. C. (2019). The spatial mapping of concepts in English and Mandarin. Journal of Cognitive Psychology, 31(7), 703-724. doi:10.1080/20445911.2019.1663354.

    Abstract

    English speakers have been shown to map abstract concepts in space, which occurs on both the vertical and horizontal dimensions. For example, words such as God are associated with up and right spatial locations, and words such as Satan with down and left. If the tendency to map concepts in space is a universal property of human cognition, then it is likely that such mappings may be at least partly culturally-specific, since many concepts are themselves language-specific and therefore cultural conventions. Here we investigated whether Mandarin speakers report spatial mapping of concepts, and how these mappings compare with English speakers (i.e. are words with the same meaning associated with the same spatial locations). Across two studies, results showed that both native English and Mandarin speakers reported spatial mapping of concepts, and that the distribution of mappings was highly similar for the two groups. Theoretical implications are discussed.
  • Kidd, E., Arciuli, J., Christiansen, M. H., & Smithson, M. (2023). The sources and consequences of individual differences in statistical learning for language development. Cognitive Development, 66: 101335. doi:10.1016/j.cogdev.2023.101335.

    Abstract

    Statistical learning (SL)—sensitivity to statistical regularities in the environment—has been postulated to support language development. While even young infants are capable of using distributional statistics to learn in linguistic and non-linguistic domains, efforts to measure SL at the level of the individual and link it to language proficiency in individual differences designs have been mixed, which has at least in part been attributed to problems with task reliability. In the current study we present the first prospective longitudinal study of the relationship between both non-linguistic SL (measured with visual stimuli) and linguistic SL (measured with auditory stimuli) and language in a group of English-speaking children. One hundred and twenty-one (N = 121) children in their first two years of formal schooling (mean age = 6;1 years, range: 5;2–7;2) completed tests of visual SL (VSL) and auditory SL (ASL) and several control variables at time 1. Both forms of SL were then measured every 6 months for the next 18 months, and at the final testing session (time 4) their language proficiency was measured using a standardised test. The results showed that the reliability of the SL tasks increased across the course of the study. A series of path analyses showed that both VSL and ASL independently predicted individual differences in language proficiency at time 4. The evidence is consistent with the suggestion that, when measured reliably, an observable relationship between SL and language proficiency exists. Theoretical and methodological issues are discussed.

    Additional information

    data and code
  • Kidd, E. (2006). [Review of the book Syntactic carpentry: An emergentist approach to syntax by William O'Grady]. Journal of Child Language, 33(4), 905-910. doi:10.1017/S030500090622782X.
  • Kidd, E. (2012). Implicit statistical learning is directly associated with the acquisition of syntax. Developmental Psychology, 48(1), 171-184. doi:10.1037/a0025405.

    Abstract

    This article reports on an individual differences study that investigated the role of implicit statistical learning in the acquisition of syntax in children. One hundred children ages 4 years 5 months through 6 years 11 months completed a test of implicit statistical learning, a test of explicit declarative learning, and standardized tests of verbal and nonverbal ability. They also completed a syntactic priming task, which provided a dynamic index of children's facility to detect and respond to changes in the input frequency of linguistic structure. The results showed that implicit statistical learning ability was directly associated with the long-term maintenance of the primed structure. The results constitute the first empirical demonstration of a direct association between implicit statistical learning and syntactic acquisition in children.
  • Kidd, E. (2012). Individual differences in syntactic priming in language acquisition. Applied Psycholinguistics, 33(2), 393-418. doi:10.1017/S0142716411000415.

    Abstract

    Although the syntactic priming methodology is a promising tool for language acquisition researchers, using the technique with children raises issues that are not problematic in adult research. The current paper reports on an individual differences study that addressed some of these outstanding issues. (a) Does priming purely reflect syntactic knowledge, or are other processes involved? (b) How can we explain individual differences, which are the norm rather than the exception? (c) Do priming effects in developmental populations reflect the same mechanisms thought to be responsible for priming in adults? One hundred twenty-two (N = 122) children aged 4 years, 5 months (4;5)–6;11 (mean = 5;7) completed a syntactic priming task that aimed to prime the English passive construction, in addition to standardized tests of vocabulary, grammar, and nonverbal intelligence. The results confirmed the widely held assumption that syntactic priming reflects the presence of syntactic knowledge, but not in every instance. However, they also suggested that nonlinguistic processes contribute significantly to priming. Priming was in no way related to age. Finally, the children's linguistic knowledge and nonverbal ability determined the manner in which they were primed. The results provide a clearer picture of what it means to be primed in acquisition.
  • Kidd, E., Lieven, E., & Tomasello, M. (2006). Examining the role of lexical frequency in children's acquisition of sentential complements. Cognitive Development, 21(2), 93-107. doi:10.1016/j.cogdev.2006.01.006.

    Abstract

    We present empirical data showing that the relative frequency with which a verb normally appears in a syntactic construction predicts young children's ability to remember and repeat sentences instantiating that construction. Children aged 2;10–5;8 years were asked to repeat grammatical and ungrammatical sentential complement sentences (e.g., ‘I think + S’). The sentences contained complement-taking verbs (CTVs) used with differing frequencies in children's natural speech. All children repeated sentences containing high frequency CTVs (e.g., think) more accurately than those containing low frequency CTVs (e.g., hear), and made more sophisticated corrections to ungrammatical sentences containing high frequency CTVs. The data suggest that, like adults, children are sensitive to lexico-constructional collocations. The implications for language acquisition are discussed.
  • Kim, S., Cho, T., & McQueen, J. M. (2012). Phonetic richness can outweigh prosodically-driven phonological knowledge when learning words in an artificial language. Journal of Phonetics, 40, 443-452. doi:10.1016/j.wocn.2012.02.005.

    Abstract

    How do Dutch and Korean listeners use acoustic–phonetic information when learning words in an artificial language? Dutch has a voiceless ‘unaspirated’ stop, produced with shortened Voice Onset Time (VOT) in prosodic strengthening environments (e.g., in domain-initial position and under prominence), enhancing the feature {−spread glottis}; Korean has a voiceless ‘aspirated’ stop produced with lengthened VOT in similar environments, enhancing the feature {+spread glottis}. Given this cross-linguistic difference, two competing hypotheses were tested. The phonological-superiority hypothesis predicts that Dutch and Korean listeners should utilize shortened and lengthened VOTs, respectively, as cues in artificial-language segmentation. The phonetic-superiority hypothesis predicts that both groups should take advantage of the phonetic richness of longer VOTs (i.e., their enhanced auditory–perceptual robustness). Dutch and Korean listeners learned the words of an artificial language better when word-initial stops had longer VOTs than when they had shorter VOTs. It appears that language-specific phonological knowledge can be overridden by phonetic richness in processing an unfamiliar language. Listeners nonetheless performed better when the stimuli were based on the speech of their native languages, suggesting that the use of richer phonetic information was modulated by listeners' familiarity with the stimuli.
  • Kim, A., & Lai, V. T. (2012). Rapid interactions between lexical semantic and word form analysis during word recognition in context: Evidence from ERPs. Journal of Cognitive Neuroscience, 24, 1104-1112. doi:10.1162/jocn_a_00148.

    Abstract

    We used event-related potentials (ERPs) to investigate the timecourse of interactions between lexical-semantic and sub-lexical visual word-form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually-supported real word (e.g., “She measured the flour so she could bake a ceke …”) or did not (e.g., “She measured the flour so she could bake a tont …”) along with nonword consonant strings (e.g., “She measured the flour so she could bake a srdt …”). Pseudowords that resembled a contextually-supported real word (“ceke”) elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., “She measured the flour so she could bake a cake …”). Pseudowords that did not resemble a plausible real word (“tont”) enhanced the N170 component, as did nonword consonant strings (“srdt”). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually-predicted inputs. The findings are consistent with rapid interactions between lexical and sub-lexical representations during word recognition, in which rapid lexical access of a contextually-supported word (CAKE) provides top-down excitation of form features (“cake”), highlighting the anomaly of an unexpected word “ceke”.
  • Kim, N., Brehm, L., & Yoshida, M. (2019). The online processing of noun phrase ellipsis and mechanisms of antecedent retrieval. Language, Cognition and Neuroscience, 34(2), 190-213. doi:10.1080/23273798.2018.1513542.

    Abstract

    We investigate whether grammatical information is accessed in processing noun phrase ellipsis (NPE) and other anaphoric constructions. The first experiment used an agreement attraction paradigm to reveal that ungrammatical plural verbs following NPE with an antecedent containing a plural modifier (e.g. Derek’s key to the boxes … and Mary’s _ probably *are safe in the drawer) show similar facilitation to non-elided NPs. The second experiment used the same paradigm to examine a coordination construction without anaphoric elements, and the third examined anaphoric one. Agreement attraction was not observed in either experiment, suggesting that processing NPE is different from processing non-anaphoric coordination constructions or anaphoric one. Taken together, the results indicate that the parser is sensitive to grammatical distinctions at the ellipsis site, where it prioritises and retrieves the head at the initial stage of processing and retrieves the local noun within the modifier phrase only when it is necessary in parsing NPE.

    Additional information

    Kim_Brehm_Yoshida_2018sup.pdf
  • Kim, S., Broersma, M., & Cho, T. (2012). The use of prosodic cues in learning new words in an unfamiliar language. Studies in Second Language Acquisition, 34, 415-444. doi:10.1017/S0272263112000137.

    Abstract

    The artificial language learning paradigm was used to investigate to what extent the use of prosodic features is universally applicable or specifically language driven in learning an unfamiliar language, and how nonnative prosodic patterns can be learned. Listeners of unrelated languages—Dutch (n = 100) and Korean (n = 100)—participated. The words to be learned varied with prosodic cues: no prosody, fundamental frequency (F0) rise in initial and final position, final lengthening, and final lengthening plus F0 rise. Both listener groups performed well above chance level with the final lengthening cue, confirming its crosslinguistic use. As for final F0 rise, however, Dutch listeners did not use it until the second exposure session, whereas Korean listeners used it at initial exposure. Neither group used initial F0 rise. On the basis of these results, F0 and durational cues appear to be universal in the sense that they are used across languages for their universally applicable auditory-perceptual saliency, but how they are used is language specific and constrains the use of available prosodic cues in processing a nonnative language. A discussion on how these findings bear on theories of second language (L2) speech perception and learning is provided.
  • Kinoshita, S., Schubert, T., & Verdonschot, R. G. (2019). Allograph priming is based on abstract letter identities: Evidence from Japanese kana. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(1), 183-190. doi:10.1037/xlm0000563.

    Abstract

    It is well-established that allographs like the uppercase and lowercase forms of the Roman alphabet (e.g., a and A) map onto the same "abstract letter identity," orthographic representations that are independent of the visual form. Consistent with this, in the allograph match task ("Are 'a' and 'A' the same letter?"), priming by a masked letter prime is equally robust for visually dissimilar prime-target pairs (e.g., d and D) and similar pairs (e.g., c and C). However, in principle this pattern of priming is also consistent with the possibility that allograph priming is purely phonological, based on the letter name. Because different allographic forms of the same letter, by definition, share a letter name, it is impossible to rule out this possibility a priori. In the present study, we investigated the influence of shared letter names by taking advantage of the fact that Japanese is written in two distinct writing systems, syllabic kana (which has two parallel forms, hiragana and katakana) and logographic kanji. Using the allograph match task, we tested whether a kanji prime with the same pronunciation as the target kana (e.g., both pronounced /i/) produces the same amount of priming as a kana prime in the opposite kana form (e.g.,). We found that the kana primes produced substantially greater priming than the phonologically identical kanji prime, which we take as evidence that allograph priming is based on abstract kana identity, not purely phonology.
  • Kinoshita, S., & Verdonschot, R. G. (2019). On recognizing Japanese katakana words: Explaining the reduced priming with hiragana and mixed-kana identity primes. Journal of Experimental Psychology: Human Perception and Performance, 45(11), 1513-1521. doi:10.1037/xhp0000692.

    Abstract

    The Japanese kana syllabary has 2 allographic forms, hiragana and katakana. As with other allographic variants like the uppercase and lowercase letters of the Roman alphabet, they show robust form-independent priming effects in the allograph match task (e.g., Kinoshita, Schubert, & Verdonschot, 2019), suggesting that they share abstract character-level representations. In direct contradiction, Perea, Nakayama, and Lupker (2017) argued that hiragana and katakana do not share character-level representations, based on their finding of reduced priming with an identity prime containing a mix of hiragana and katakana (the mixed-kana prime) relative to the all-katakana identity prime in a lexical-decision task with loanword targets written in katakana. Here we sought to reconcile these seemingly contradictory claims, using mixed-kana, hiragana, and katakana primes in lexical decision. The mixed-kana prime and the hiragana prime produced priming effects that were indistinguishable, and both were reduced in size relative to the priming effect produced by the katakana identity prime. Furthermore, this pattern was unchanged when the target was presented in hiragana. The findings are interpreted in terms of the assumption that the katakana format is specified in the orthographic representation of loanwords in Japanese readers. Implications of the account for universality across writing systems are discussed.
  • Kirjavainen, M., Nikolaev, A., & Kidd, E. (2012). The effect of frequency and phonological neighbourhood density on the acquisition of past tense verbs by Finnish children. Cognitive Linguistics, 23(2), 273-315. doi:10.1515/cog-2012-0009.

    Abstract

    The acquisition of the past tense has received substantial attention in the psycholinguistics literature, yet most studies report data from English or closely related Indo-European languages. We report on a past tense elicitation study on 136 4–6-year-old children that were acquiring a highly inflected Finno-Ugric (Uralic) language—Finnish. The children were tested on real and novel verbs (N = 120) exhibiting (1) productive, (2) semi-productive, or (3) non-productive inflectional processes manipulated for frequency and phonological neighbourhood density (PND). We found that Finnish children are sensitive to lemma/base frequency and PND when processing inflected words, suggesting that even though children were using suffixation processes, they were also paying attention to the item level properties of the past tense verbs. This paper contributes to the growing body of research suggesting a single analogical/associative mechanism is sufficient in processing both productive (i.e., regular-like) and non-productive (i.e., irregular-like) words. We argue that seemingly rule-like elements in inflectional morphology are an emergent property of the lexicon.
  • Klassmann, A., Offenga, F., Broeder, D., & Skiba, R. (2006). IMDI metadata field usage at MPI. Language Archive Newsletter, no. 8, 6-8.
  • De Kleijn, R., Wijnen, M., & Poletiek, F. H. (2019). The effect of context-dependent information and sentence constructions on perceived humanness of an agent in a Turing test. Knowledge-Based Systems, 163, 794-799. doi:10.1016/j.knosys.2018.10.006.

    Abstract

    In a Turing test, a judge decides whether their conversation partner is either a machine or human. What cues does the judge use to determine this? In particular, are presumably unique features of human language actually perceived as humanlike? Participants rated the humanness of a set of sentences that were manipulated for grammatical construction (linear right-branching or hierarchical center-embedded) and for their plausibility with regard to world knowledge.

    We found that center-embedded sentences are perceived as less humanlike than right-branching sentences and more plausible sentences are regarded as more humanlike. However, the effect of plausibility of the sentence on perceived humanness is smaller for center-embedded sentences than for right-branching sentences.

    Participants also rated a conversation with either correct or incorrect use of the context by the agent. No effect of context use was found. Also, participants rated a full transcript of either a real human or a real chatbot, and we found that chatbots were reliably perceived as less humanlike than real humans, in line with our expectation. We did, however, find individual differences between chatbots and humans.
