Publications

  • Hald, L. A., Steenbeek-Planting, E. G., & Hagoort, P. (2007). The interaction of discourse context and world knowledge in online sentence comprehension: Evidence from the N400. Brain Research, 1146, 210-218. doi:10.1016/j.brainres.2007.02.054.

    Abstract

    In an ERP experiment we investigated how the recruitment and integration of world knowledge information relate to the integration of information within a current discourse context. Participants were presented with short discourse contexts which were followed by a sentence that contained a critical word that was correct or incorrect based on general world knowledge and the supporting discourse context, or was more or less acceptable based on the combination of general world knowledge and the specific local discourse context. Relative to the critical word in the correct world knowledge sentences following a neutral discourse, all other critical words elicited an N400 effect that began at about 300 ms after word onset. However, the magnitude of the N400 effect varied in a way that suggests an interaction between world knowledge and discourse context. The results indicate that both world knowledge and discourse context have an effect on sentence interpretation, but neither overrides the other.
  • Haller, S., Klarhoefer, M., Schwarzbach, J., Radue, E. W., & Indefrey, P. (2007). Spatial and temporal analysis of fMRI data on word and sentence reading. European Journal of Neuroscience, 26(7), 2074-2084. doi:10.1111/j.1460-9568.2007.05816.x.

    Abstract

    Written language comprehension at the word and the sentence level was analysed by the combination of spatial and temporal analysis of functional magnetic resonance imaging (fMRI). Spatial analysis was performed via general linear modelling (GLM). Concerning the temporal analysis, local differences in neurovascular coupling may confound a direct comparison of blood oxygenation level-dependent (BOLD) response estimates between regions. To avoid this problem, we parametrically varied linguistic task demands and compared only task-induced within-region BOLD response differences across areas. We reasoned that, in a hierarchical processing system, increasing task demands at lower processing levels induce delayed onset of higher-level processes in corresponding areas. The flow of activation is thus reflected in the size of task-induced delay increases. We estimated BOLD response delay and duration for each voxel and each participant by fitting a model function to the event-related average BOLD response. The GLM showed increasing activations with increasing linguistic demands dominantly in the left inferior frontal gyrus (IFG) and the left superior temporal gyrus (STG). The combination of spatial and temporal analysis allowed a functional differentiation of IFG subregions involved in written language comprehension. Ventral IFG region (BA 47) and STG subserve earlier processing stages than two dorsal IFG regions (BA 44 and 45). This is in accordance with the assumed early lexical semantic and late syntactic processing of these regions and illustrates the complementary information provided by spatial and temporal fMRI data analysis of the same data set.
  • Hamilton, A., & Holler, J. (Eds.). (2023). Face2face: Advancing the science of social interaction [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences. Retrieved from https://royalsocietypublishing.org/toc/rstb/2023/378/1875.

    Abstract

    Face-to-face interaction is fundamental to human sociality but is very complex to study in a scientific fashion. This theme issue brings together cutting-edge approaches to the study of face-to-face interaction and showcases how we can make progress in this area. Researchers are now studying interaction in adult conversation, parent-child relationships, neurodiverse groups, interactions with virtual agents and various animal species. The theme issue reveals how new paradigms are leading to more ecologically grounded and comprehensive insights into what social interaction is. Scientific advances in this area can lead to improvements in education and therapy, better understanding of neurodiversity and more engaging artificial agents.
  • Hamilton, A., & Holler, J. (2023). Face2face: Advancing the science of social interaction. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210470. doi:10.1098/rstb.2021.0470.

    Abstract

    Face-to-face interaction is core to human sociality and its evolution, and provides the environment in which most of human communication occurs. Research into the full complexities that define face-to-face interaction requires a multi-disciplinary, multi-level approach, illuminating from different perspectives how we and other species interact. This special issue showcases a wide range of approaches, bringing together detailed studies of naturalistic social-interactional behaviour with larger scale analyses for generalization, and investigations of socially contextualized cognitive and neural processes that underpin the behaviour we observe. We suggest that this integrative approach will allow us to propel forwards the science of face-to-face interaction by leading us to new paradigms and novel, more ecologically grounded and comprehensive insights into how we interact with one another and with artificial agents, how differences in psychological profiles might affect interaction, and how the capacity to socially interact develops and has evolved in the human and other species. This theme issue takes a first step in this direction, with the aim of breaking down disciplinary boundaries and emphasizing the value of illuminating the many facets of face-to-face interaction.
  • Hamshere, M. L., Segurado, R., Moskvina, V., Nikolov, I., Glaser, B., & Holmans, P. A. (2007). Large-scale linkage analysis of 1302 affected relative pairs with rheumatoid arthritis. BMC Proceedings, 1 (Suppl 1), S100.

    Abstract

    Rheumatoid arthritis is the most common systemic autoimmune disease and its etiology is believed to have both strong genetic and environmental components. We demonstrate the utility of including genetic and clinical phenotypes as covariates within a linkage analysis framework to search for rheumatoid arthritis susceptibility loci. The raw genotypes of 1302 affected relative pairs were combined from four large family-based samples (North American Rheumatoid Arthritis Consortium, United Kingdom, European Consortium on Rheumatoid Arthritis Families, and Canada). The familiality of the clinical phenotypes was assessed. The affected relative pairs were subjected to autosomal multipoint affected relative-pair linkage analysis. Covariates were included in the linkage analysis to take account of heterogeneity within the sample. Evidence of familiality was observed with age at onset (p < 0.001) and rheumatoid factor (RF) IgM (p < 0.001), but not definite erosions (p = 0.21). Genome-wide significant evidence for linkage was observed on chromosome 6. Genome-wide suggestive evidence for linkage was observed on chromosomes 13 and 20 when conditioning on age at onset, chromosome 15 conditional on gender, and chromosome 19 conditional on RF IgM after allowing for multiple testing of covariates.
  • Harbusch, K., & Kempen, G. (2000). Complexity of linear order computation in Performance Grammar, TAG and HPSG. In Proceedings of Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5) (pp. 101-106).

    Abstract

    This paper investigates the time and space complexity of word order computation in the psycholinguistically motivated grammar formalism of Performance Grammar (PG). In PG, the first stage of syntax assembly yields an unordered tree ('mobile') consisting of a hierarchy of lexical frames (lexically anchored elementary trees). Associated with each lexical frame is a linearizer—a Finite-State Automaton that locally computes the left-to-right order of the branches of the frame. Linearization takes place after the promotion component may have raised certain constituents (e.g. Wh- or focused phrases) into the domain of lexical frames higher up in the syntactic mobile. We show that the worst-case time and space complexity of analyzing input strings of length n is O(n⁵) and O(n⁴), respectively. This result compares favorably with the time complexity of word-order computations in Tree Adjoining Grammar (TAG). A comparison with Head-Driven Phrase Structure Grammar (HPSG) reveals that PG yields a more declarative linearization method, provided that the FSA is rewritten as an equivalent regular expression.
  • Harbusch, K., & Kempen, G. (2007). Clausal coordinate ellipsis in German: The TIGER treebank as a source of evidence. In J. Nivre, H. J. Kaalep, M. Kadri, & M. Koit (Eds.), Proceedings of the 16th Nordic Conference of Computational Linguistics (NODALIDA 2007) (pp. 81-88). Tartu: University of Tartu.

    Abstract

    Syntactic parsers and generators need high-quality grammars of coordination and coordinate ellipsis—structures that occur very frequently but are much less well understood theoretically than many other domains of grammar. Modern grammars of coordinate ellipsis are based nearly exclusively on linguistic judgments (intuitions). The extent to which grammar rules based on this type of empirical evidence generate all and only the structures in text corpora is unknown. As part of a project on the development of a grammar and a generator for coordinate ellipsis in German, we undertook an extensive exploration of the TIGER treebank—a syntactically annotated corpus of about 50,000 newspaper sentences. We report (1) frequency data for the various patterns of coordinate ellipsis, and (2) several rarely (but regularly) occurring ‘fringe deviations’ from the intuition-based rules for several ellipsis types. This information can help improve parser and generator performance.
  • Harbusch, K., Breugel, C., Koch, U., & Kempen, G. (2007). Interactive sentence combining and paraphrasing in support of integrated writing and grammar instruction: A new application area for natural language sentence generators. In S. Busemann (Ed.), Proceedings of the 11th European Workshop in Natural Language Generation (ENLG07) (pp. 65-68). ACL Anthology.

    Abstract

    The potential of sentence generators as engines in Intelligent Computer-Assisted Language Learning and teaching (ICALL) software has hardly been explored. We sketch the prototype of COMPASS, a system that supports integrated writing and grammar curricula for 10- to 14-year-old elementary or secondary school students. The system enables first- or second-language teachers to design controlled writing exercises, in particular of the “sentence combining” variety. The system includes facilities for error diagnosis and on-line feedback. Syntactic structures built by students or system can be displayed as easily understood phrase-structure or dependency trees, adapted to the student’s level of grammatical knowledge. The heart of the system is a specially designed generator capable of lexically guided sentence generation, of generating syntactic paraphrases, and of displaying syntactic structures visually.
  • Harmon, Z., Barak, L., Shafto, P., Edwards, J., & Feldman, N. H. (2023). The competition-compensation account of developmental language disorder. Developmental Science, 26(4): e13364. doi:10.1111/desc.13364.

    Abstract

    Children with developmental language disorder (DLD) regularly use the bare form of verbs (e.g., dance) instead of inflected forms (e.g., danced). We propose an account of this behavior in which processing difficulties of children with DLD disproportionally affect processing novel inflected verbs in their input. Limited experience with inflection in novel contexts leads the inflection to face stronger competition from alternatives. Competition is resolved through a compensatory behavior that involves producing a more accessible alternative: in English, the bare form. We formalize this hypothesis within a probabilistic model that trades off context-dependent versus independent processing. Results show an over-reliance on preceding stem contexts when retrieving the inflection in a model that has difficulty with processing novel inflected forms. We further show that following the introduction of a bias to store and retrieve forms with preceding contexts, generalization in the typically developing (TD) models remains more or less stable, while the same bias in the DLD models exaggerates difficulties with generalization. Together, the results suggest that inconsistent use of inflectional morphemes by children with DLD could stem from inferences they make on the basis of data containing fewer novel inflected forms. Our account extends these findings to suggest that problems with detecting a form in novel contexts combined with a bias to rely on familiar contexts when retrieving a form could explain sequential planning difficulties in children with DLD.
  • Haun, D. B. M. (2007). Cognitive cladistics and the relativity of spatial cognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    This thesis elaborates on a methodological approach to reliably infer cognitive preferences in an extinct evolutionary ancestor of modern humans. In attempts to understand cognitive evolution, humans have been compared to capuchin monkeys, tamarins, and chimpanzees to name but a few. But comparisons between humans and one other, maybe even distantly related primate, as interesting as they might be, will not tell us anything about an evolutionary ancestor to humans. To put it bluntly: None of the living primates, not even chimpanzees, are a human ancestor. With that in mind, we can still use a comparative approach to gain information about our evolutionary ancestors, as long as we are careful about whom we compare with whom. If a certain trait exists in all genera of a phylogenetic clade, it was most likely present in their common ancestor. The great apes are such a clade (Pongo, Gorilla, Pan and Homo). It follows that, if members of all great ape genera shared a particular cognitive preference or ability, it is most likely part of the evolutionary inheritance of the clade at least ever since their last common ancestor, and therefore also an evolutionarily old, inherited cognitive default in humans. This thesis contains studies comparing all 4 extant Hominid genera, including humans of 4 different age-groups and 2 different cultures. Results show that all great apes do indeed share some cognitive preferences, which they most likely inherited from an evolutionary ancestor. Additionally, human cognitive preferences can change away from such an inherited predisposition given ontogenetic factors, and are at least in part variably adaptable to cultural circumstance.

    Additional information

    full text via Radboud Repository
  • Hayano, K. (2007). Repetitional agreement and anaphorical agreement: Negotiation of affiliation and disaffiliation in Japanese conversation. Master's Thesis, University of California, Los Angeles.
  • Heim, F., Fisher, S. E., Scharff, C., Ten Cate, C., & Riebel, K. (2023). Effects of cortical FoxP1 knockdowns on learned song preference in female zebra finches. eNeuro, 10(3): ENEURO.0328-22.2023. doi:10.1523/ENEURO.0328-22.2023.

    Abstract

    The search for molecular underpinnings of human vocal communication has focused on genes encoding forkhead-box transcription factors, as rare disruptions of FOXP1, FOXP2, and FOXP4 have been linked to disorders involving speech and language deficits. In male songbirds, an animal model for vocal learning, experimentally altered expression levels of these transcription factors impair song production learning. The relative contributions of auditory processing, motor function or auditory-motor integration to the deficits observed after different FoxP manipulations in songbirds are unknown. To examine the potential effects on auditory learning and development, we focused on female zebra finches (Taeniopygia guttata) that do not sing but develop song memories, which can be assayed in operant preference tests. We tested whether the relatively high levels of FoxP1 expression in forebrain areas implicated in female song preference learning are crucial for the development and/or maintenance of this behavior. Juvenile and adult female zebra finches received FoxP1 knockdowns targeted to HVC (proper name) or to the caudomedial mesopallium (CMM). Irrespective of target site and whether the knockdown took place before (juveniles) or after (adults) the sensitive phase for song memorization, all groups preferred their tutor’s song. However, adult females with FoxP1 knockdowns targeted at HVC showed weaker motivation to hear song and weaker song preferences than sham-treated controls, while no such differences were observed after knockdowns in CMM or in juveniles. In summary, FoxP1 knockdowns in the cortical song nucleus HVC were not associated with impaired tutor song memory but reduced motivation to actively request tutor songs.
  • Hellwig, B., Allen, S. E. M., Davidson, L., Defina, R., Kelly, B. F., & Kidd, E. (Eds.). (2023). The acquisition sketch project [Special Issue]. Language Documentation and Conservation Special Publication, 28.

    Abstract

    This special publication aims to build a renewed enthusiasm for collecting acquisition data across many languages, including those facing endangerment and loss. It presents a guide for documenting and describing child language and child-directed language in diverse languages and cultures, as well as a collection of acquisition sketches based on this guide. The guide is intended for anyone interested in working across child language and language documentation, including, for example, field linguists and language documenters, community language workers, child language researchers or graduate students.
  • Hellwig, B., Allen, S. E. M., Davidson, L., Defina, R., Kelly, B. F., & Kidd, E. (2023). Introduction: The acquisition sketch project. Language Documentation and Conservation Special Publication, 28, 1-3. Retrieved from https://hdl.handle.net/10125/74718.
  • Henke, L., Lewis, A. G., & Meyer, L. (2023). Fast and slow rhythms of naturalistic reading revealed by combined eye-tracking and electroencephalography. The Journal of Neuroscience, 43(24), 4461-4469. doi:10.1523/JNEUROSCI.1849-22.2023.

    Abstract

    Neural oscillations are thought to support speech and language processing. They may not only inherit acoustic rhythms, but might also impose endogenous rhythms onto processing. In support of this, we here report that human (both male and female) eye movements during naturalistic reading exhibit rhythmic patterns that show frequency-selective coherence with the EEG, in the absence of any stimulation rhythm. Periodicity was observed in two distinct frequency bands: First, word-locked saccades at 4-5 Hz display coherence with whole-head theta-band activity. Second, fixation durations fluctuate rhythmically at ∼1 Hz, in coherence with occipital delta-band activity. This latter effect was additionally phase-locked to sentence endings, suggesting a relationship with the formation of multi-word chunks. Together, eye movements during reading contain rhythmic patterns that occur in synchrony with oscillatory brain activity. This suggests that linguistic processing imposes preferred processing time scales onto reading, largely independent of actual physical rhythms in the stimulus.
  • Herbst, L. E. (2007). German 5-year-olds' intonational marking of information status. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1557-1560). Dudweiler: Pirrot.

    Abstract

    This paper reports on findings from an elicited production task with German 5-year-old children, investigating their use of intonation to mark information status of discourse referents. In line with findings for adults, new referents were preferably marked by H* and L+H*; textually given referents were mainly deaccented. Accessible referents (whose first mentions were less recent) were mostly accented, and predominantly also realised with H* and L+H*, showing children’s sensitivity to recency of mention. No evidence for the consistent use of a special ‘accessibility accent’ H+L* (as has been proposed for adult German) was found.
  • Hersh, T. A., Ravignani, A., & Burchardt, L. (2023). Robust rhythm reporting will advance ecological and evolutionary research. Methods in Ecology and Evolution, 14(6), 1398-1407. doi:10.1111/2041-210X.14118.

    Abstract

    Rhythmicity in the millisecond to second range is a fundamental building block of communication and coordinated movement. But how widespread are rhythmic capacities across species, and how did they evolve under different environmental pressures? Comparative research is necessary to answer these questions but has been hindered by limited crosstalk and comparability among results from different study species.
    Most acoustics studies do not explicitly focus on characterising or quantifying rhythm, but many are just a few scrapes away from contributing to and advancing the field of comparative rhythm research. Here, we present an eight-level rhythm reporting framework which details actionable steps researchers can take to report rhythm-relevant metrics. Levels fall into two categories: metric reporting and data sharing. Metric reporting levels include defining rhythm-relevant metrics, providing point estimates of temporal interval variability, reporting interval distributions, and conducting rhythm analyses. Data sharing levels are: sharing audio recordings, sharing interval durations, sharing sound element start and end times, and sharing audio recordings with sound element start/end times.
    Using sounds recorded from a sperm whale as a case study, we demonstrate how each reporting framework level can be implemented on real data. We also highlight existing best practice examples from recent research spanning multiple species. We clearly detail how engagement with our framework can be tailored case-by-case based on how much time and effort researchers are willing to contribute. Finally, we illustrate how reporting at any of the suggested levels will help advance comparative rhythm research.
    This framework will actively facilitate a comparative approach to acoustic rhythms while also promoting cooperation and data sustainability. By quantifying and reporting rhythm metrics more consistently and broadly, new avenues of inquiry and several long-standing, big picture research questions become more tractable. These lines of research can inform not only about the behavioural ecology of animals but also about the evolution of rhythm-relevant phenomena and the behavioural neuroscience of rhythm production and perception. Rhythm is clearly an emergent feature of life; adopting our framework, researchers from different fields and with different study species can help understand why.

    Additional information

    Research Data availability
  • Hintz, F., Khoe, Y. H., Strauß, A., Psomakas, A. J. A., & Holler, J. (2023). Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cognitive, Affective and Behavioral Neuroscience, 23, 340-353. doi:10.3758/s13415-023-01074-8.

    Abstract

    In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded electroencephalogram from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke slightly preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
  • Hintz, F., Voeten, C. C., & Scharenborg, O. (2023). Recognizing non-native spoken words in background noise increases interference from the native language. Psychonomic Bulletin & Review, 30, 1549-1563. doi:10.3758/s13423-022-02233-7.

    Abstract

    Listeners frequently recognize spoken words in the presence of background noise. Previous research has shown that noise reduces phoneme intelligibility and hampers spoken-word recognition—especially for non-native listeners. In the present study, we investigated how noise influences lexical competition in both the non-native and the native language, reflecting the degree to which both languages are co-activated. We recorded the eye movements of native Dutch participants as they listened to English sentences containing a target word while looking at displays containing four objects. On target-present trials, the visual referent depicting the target word was present, along with three unrelated distractors. On target-absent trials, the target object (e.g., wizard) was absent. Instead, the display contained an English competitor, overlapping with the English target in phonological onset (e.g., window), a Dutch competitor, overlapping with the English target in phonological onset (e.g., wimpel, pennant), and two unrelated distractors. Half of the sentences were masked by speech-shaped noise; the other half was presented in quiet. Compared to speech in quiet, noise delayed fixations to the target objects on target-present trials. For target-absent trials, we observed that the likelihood for fixation biases towards the English and Dutch onset competitors (over the unrelated distractors) was larger in noise than in quiet. Our data thus show that the presence of background noise increases lexical competition in the task-relevant non-native (English) and in the task-irrelevant native (Dutch) language. The latter reflects stronger interference of one’s native language during non-native spoken-word recognition under adverse conditions.

    Additional information

    table 2 target-absent items
  • Holler, J., & Beattie, G. (2007). Gesture use in social interaction: How speakers' gestures can reflect listeners' thinking. In L. Mondada (Ed.), On-line Proceedings of the 2nd Conference of the International Society of Gesture Studies, Lyon, France, 15-18 June 2005.
  • Holler, J., & Stevens, R. (2007). The effect of common ground on how speakers use gesture and speech to represent size information. Journal of Language and Social Psychology, 26, 4-27.
  • Hoogman, M., Weisfelt, M., van de Beek, D., de Gans, J., & Schmand, B. (2007). Cognitive outcome in adults after bacterial meningitis. Journal of Neurology, Neurosurgery & Psychiatry, 78, 1092-1096. doi:10.1136/jnnp.2006.110023.

    Abstract

    Objective: To evaluate cognitive outcome in adult survivors of bacterial meningitis. Methods: Data from three prospective multicentre studies were pooled and reanalysed, involving 155 adults surviving bacterial meningitis (79 after pneumococcal and 76 after meningococcal meningitis) and 72 healthy controls. Results: Cognitive impairment was found in 32% of patients and this proportion was similar for survivors of pneumococcal and meningococcal meningitis. Survivors of pneumococcal meningitis performed worse on memory tasks (p<0.001) and tended to be cognitively slower than survivors of meningococcal meningitis (p = 0.08). We found a diffuse pattern of cognitive impairment in which cognitive speed played the most important role. Cognitive performance was not related to time since meningitis; however, there was a positive association between time since meningitis and self-reported physical impairment (p<0.01). The frequency of cognitive impairment and the numbers of abnormal test results for patients with and without adjunctive dexamethasone were similar. Conclusions: Adult survivors of bacterial meningitis are at risk of cognitive impairment, which consists mainly of cognitive slowness. The loss of cognitive speed is stable over time after bacterial meningitis; however, there is a significant improvement in subjective physical impairment in the years after bacterial meningitis. The use of dexamethasone was not associated with cognitive impairment.
  • De Hoop, H., Levshina, N., & Segers, M. (2023). The effect of the use of T and V pronouns in Dutch HR communication. Journal of Pragmatics, 203, 96-109. doi:10.1016/j.pragma.2022.11.017.

    Abstract

    In an online experiment among native speakers of Dutch we measured addressees' responses to emails written in the informal pronoun T or the formal pronoun V in HR communication. 172 participants (61 male, mean age 37 years) read either the V-versions or the T-versions of two invitation emails and two rejection emails by four different fictitious recruiters. After each email, participants had to score their appreciation of the company and the recruiter on five different scales each, such as The recruiter who wrote this email seems … [scale from friendly to unfriendly]. We hypothesized that (i) the V-pronoun would be more appreciated in letters of rejection, and the T-pronoun in letters of invitation, and (ii) older people would appreciate the V-pronoun more than the T-pronoun, and the other way around for younger people. Although neither of these hypotheses was supported, we did find a small effect of pronoun: Emails written in V were more highly appreciated than emails in T, irrespective of type of email (invitation or rejection), and irrespective of the participant's age, gender, and level of education. At the same time, we observed differences in the strength of this effect across different scales.
  • Horton, S., Jackson, V., Boyce, J., Franken, M.-C., Siemers, S., St John, M., Hearps, S., Van Reyk, O., Braden, R., Parker, R., Vogel, A. P., Eising, E., Amor, D. J., Irvine, J., Fisher, S. E., Martin, N. G., Reilly, S., Bahlo, M., Scheffer, I., & Morgan, A. (2023). Self-reported stuttering severity is accurate: Informing methods for large-scale data collection in stuttering. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2023_JSLHR-23-00081.

    Abstract

    Purpose:
    To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.

    Method:
    Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5–84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.

    Results:
    There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.

    Conclusions:
    Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-reported stuttering severity is a suitable measure for large-scale data collection. Findings also support the collection of self-reported subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi:10.1016/j.jml.2007.02.001.

    Abstract

    Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, 'beaker', for example, the display contained phonological (a beaver, bever), shape (a bobbin, klos), and semantic (a fork, vork) competitors. When the display appeared at sentence onset, fixations to phonological competitors preceded fixations to shape and semantic competitors. When display onset was 200 ms before (e.g.) beker, fixations were directed to shape and then semantic competitors, but not phonological competitors. In Experiments 3 and 4, displays contained the printed names of the previously-pictured entities; only phonological competitors were fixated preferentially. These findings suggest that retrieval of phonological, shape and semantic knowledge in the spoken-word and picture-recognition systems is cascaded, and that visual attention shifts are co-determined by the time-course of retrieval of all three knowledge types and by the nature of the information in the visual environment.
  • Huettig, F., & Altmann, G. T. M. (2007). Visual-shape competition during language-mediated attention is based on lexical input and not modulated by contextual appropriateness. Visual Cognition, 15(8), 985-1018. doi:10.1080/13506280601130875.

    Abstract

    Visual attention can be directed immediately, as a spoken word unfolds, towards conceptually related but nonassociated objects, even if they mismatch on other dimensions that would normally determine which objects in the scene were appropriate referents for the unfolding word (Huettig & Altmann, 2005). Here we demonstrate that the mapping between language and concurrent visual objects can also be mediated by visual-shape relations. On hearing "snake", participants directed overt attention immediately, within a visual display depicting four objects, to a picture of an electric cable, although participants had viewed the visual display with four objects for approximately 5 s before hearing the target word - sufficient time to recognize the objects for what they were. The time spent fixating the cable correlated significantly with ratings of the visual similarity between snakes in general and this particular cable. Importantly, with sentences contextually biased towards the concept snake, participants looked at the snake well before the onset of "snake", but they did not look at the visually similar cable until hearing "snake". Finally, we demonstrate that such activation can, under certain circumstances (e.g., during the processing of dominant meanings of homonyms), constrain the direction of visual attention even when it is clearly contextually inappropriate. We conclude that language-mediated attention can be guided by a visual match between spoken words and visual objects, but that such a match is based on lexical input and may not be modulated by contextual appropriateness.
  • Huettig, F., Voeten, C. C., Pascual, E., Liang, J., & Hintz, F. (2023). Do autistic children differ in language-mediated prediction? Cognition, 239: 105571. doi:10.1016/j.cognition.2023.105571.

    Abstract

    Prediction appears to be an important characteristic of the human mind. It has also been suggested that prediction is a core difference of autistic children. Past research exploring language-mediated anticipatory eye movements in autistic children, however, has been somewhat contradictory, with some studies finding normal anticipatory processing in autistic children with low levels of autistic traits but others observing weaker prediction effects in autistic children with less receptive language skills. Here we investigated language-mediated anticipatory eye movements in young children who differed in the severity of their level of autistic traits and were in professional institutional care in Hangzhou, China. We chose the same spoken sentences (translated into Mandarin Chinese) and visual stimuli as a previous study which observed robust prediction effects in young children (Mani & Huettig, 2012) and included a control group of typically-developing children. Typically developing but not autistic children showed robust prediction effects. Most interestingly, autistic children with lower communication, motor, and (adaptive) behavior scores exhibited both less predictive and non-predictive visual attention behavior. Our results raise the possibility that differences in language-mediated anticipatory eye movements in autistic children with higher levels of autistic traits may be differences in visual attention in disguise, a hypothesis that needs further investigation.
  • Huettig, F., & Ferreira, F. (2023). The myth of normal reading. Perspectives on Psychological Science, 18(4), 863-870. doi:10.1177/17456916221127226.

    Abstract

    We argue that the educational and psychological sciences must embrace the diversity of reading rather than chase the phantom of normal reading behavior. We critically discuss the research practice of asking participants in experiments to read “normally”. We then draw attention to the large cross-cultural and linguistic diversity around the world and consider the enormous diversity of reading situations and goals. Finally, we observe that people bring a huge diversity of brains and experiences to the reading task. This leads to certain implications. First, there are important lessons for how to conduct psycholinguistic experiments. Second, we need to move beyond Anglo-centric reading research and produce models of reading that reflect the large cross-cultural diversity of languages and types of writing systems. Third, we must acknowledge that there are multiple ways of reading and reasons for reading, and none of them is normal or better or a “gold standard”. Finally, we must stop stigmatizing individuals who read differently and for different reasons, and there should be increased focus on teaching the ability to extract information relevant to the person’s goals. What is important is not how well people decode written language and how fast people read but what people comprehend given their own stated goals.
  • Huisman, J. L. A., Van Hout, R., & Majid, A. (2023). Cross-linguistic constraints and lineage-specific developments in the semantics of cutting and breaking in Japonic and Germanic. Linguistic Typology, 27(1), 41-75. doi:10.1515/lingty-2021-2090.

    Abstract

    Semantic variation in the cutting and breaking domain has been shown to be constrained across languages in a previous typological study, but it was unclear whether Japanese was an outlier in this domain. Here we revisit cutting and breaking in the Japonic language area by collecting new naming data for 40 videoclips depicting cutting and breaking events in Standard Japanese, the highly divergent Tohoku dialects, as well as four related Ryukyuan languages (Amami, Okinawa, Miyako and Yaeyama). We find that the Japonic languages recapitulate the same semantic dimensions attested in the previous typological study, confirming that semantic variation in the domain of cutting and breaking is indeed cross-linguistically constrained. We then compare our new Japonic data to previously collected Germanic data and find that, in general, related languages resemble each other more than unrelated languages, and that the Japonic languages resemble each other more than the Germanic languages do. Nevertheless, English resembles all of the Japonic languages more than it resembles Swedish. Together, these findings show that the rate and extent of semantic change can differ between language families, indicating the existence of lineage-specific developments on top of universal cross-linguistic constraints.
  • Huizeling, E., Alday, P. M., Peeters, D., & Hagoort, P. (2023). Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia, 191: 108730. doi:10.1016/j.neuropsychologia.2023.108730.

    Abstract

    EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing - when the verb was predictive of the noun - or during noun processing - when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and unpredictable nouns. We replicated typical effects of noun congruence but not predictability on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports, where the visual context may have facilitated processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing and the length of time fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.
  • Hunley, K., Dunn, M., Lindström, E., Reesink, G., Terrill, A., Norton, H., Scheinfeldt, L., Friedlaender, F. R., Merriwether, D. A., Koki, G., & Friedlaender, J. S. (2007). Inferring prehistory from genetic, linguistic, and geographic variation. In J. S. Friedlaender (Ed.), Genes, language, & culture history in the Southwest Pacific (pp. 141-154). Oxford: Oxford University Press.

    Abstract

    This chapter investigates the fit of genetic, phenotypic, and linguistic data to two well-known models of population history. The first of these models, termed the population fissions model, emphasizes population splitting, isolation, and independent evolution. It predicts that genetic and linguistic data will be perfectly tree-like. The second model, termed isolation by distance, emphasizes genetic exchange among geographically proximate populations. It predicts a monotonic decline in genetic similarity with increasing geographic distance. While these models are overly simplistic, deviations from them were expected to provide important insights into the population history of northern Island Melanesia. The chapter finds scant support for either model because the prehistory of the region has been so complex. Nonetheless, the genetic and linguistic data are consistent with an early radiation of proto-Papuan speakers into the region followed by a much later migration of Austronesian speaking peoples. While these groups subsequently experienced substantial genetic and cultural exchange, this exchange has been insufficient to erase this history of separate migrations.
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2023). Effects of picture naming and categorization on concurrent comprehension: Evidence from the N400. Collabra: Psychology, 9(1): 88129. doi:10.1525/collabra.88129.

    Abstract

    In conversations, interlocutors concurrently perform two related processes: speech comprehension and speech planning. We investigated effects of speech planning on comprehension using EEG. Dutch speakers listened to sentences that ended with expected or unexpected target words. In addition, a picture was presented two seconds after target onset (Experiment 1) or 50 ms before target onset (Experiment 2). Participants’ task was to name the picture or to stay quiet depending on the picture category. In Experiment 1, we found a strong N400 effect in response to unexpected compared to expected target words. Importantly, this N400 effect was reduced in Experiment 2 compared to Experiment 1. Unexpectedly, the N400 effect was not smaller in the naming compared to the categorization condition. This indicates that conceptual preparation or the decision whether to speak (taking place in both task conditions of Experiment 2) rather than processes specific to word planning interfere with comprehension.
  • Huttar, G. L., Essegbey, J., & Ameka, F. K. (2007). Gbe and other West African sources of Suriname creole semantic structures: Implications for creole genesis. Journal of Pidgin and Creole Languages, 22(1), 57-72. doi:10.1075/jpcl.22.1.05hut.

    Abstract

    This paper reports on ongoing research on the role of various kinds of potential substrate languages in the development of the semantic structures of Ndyuka (Eastern Suriname Creole). A set of 100 senses of noun, verb, and other lexemes in Ndyuka were compared with senses of corresponding lexemes in three kinds of languages of the former Slave Coast and Gold Coast areas, and immediately adjoining hinterland: (a) Gbe languages; (b) other Kwa languages, specifically Akan and Ga; (c) non-Kwa Niger-Congo languages. The results of this process provide some evidence for the importance of the Gbe languages in the formation of the Suriname creoles, but also for the importance of other languages, and for the areal nature of some of the collocations studied, rendering specific identification of a single substrate source impossible and inappropriate. These results not only provide information about the role of Gbe and other languages in the formation of Ndyuka, but also give evidence for effects of substrate languages spoken by late arrivals some time after the "founders" of a given creole-speaking society. The conclusions are extrapolated beyond Suriname to creole genesis generally.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P. (2007). Brain imaging studies of language production. In G. Gaskell (Ed.), Oxford handbook of psycholinguistics (pp. 547-564). Oxford: Oxford University Press.

    Abstract

    Neurocognitive studies of language production have provided sufficient evidence on both the spatial and the temporal patterns of brain activation to allow tentative and in some cases not so tentative conclusions about function-structure relationships. This chapter reports meta-analysis results that identify reliable activation areas for a range of word, sentence, and narrative production tasks both in the native language and a second language. Based on a theoretically motivated analysis of language production tasks it is possible to specify relationships between brain areas and functional processing components of language production that could not have been derived from the data provided by any single task.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., & Levelt, W. J. M. (2000). The neural correlates of language production. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 845-865). Cambridge, MA: MIT Press.

    Abstract

    This chapter reviews the findings of 58 word production experiments using different tasks and neuroimaging techniques. The reported cerebral activation sites are coded in a common anatomic reference system. Based on a functional model of language production, the different word production tasks are analyzed in terms of their processing components. This approach allows a distinction between the core process of word production and preceding task-specific processes (lead-in processes) such as visual or auditory stimulus recognition. The core process of word production is subserved by a left-lateralized perisylvian/thalamic language production network. Within this network there seems to be functional specialization for the processing stages of word production. In addition, this chapter includes a discussion of the available evidence on syntactic production, self-monitoring, and the time course of word production.
  • Ingvar, M., & Petersson, K. M. (2000). Functional maps and brain networks. In A. W. Toga (Ed.), Brain mapping: The systems (pp. 111-140). San Diego: Academic Press.
  • Isaac, A., Zinn, C., Matthezing, H., Van de Meij, H., Schlobach, S., & Wang, S. (2007). The value of usage scenarios for thesaurus alignment in cultural heritage context. In Proceedings of the ISWC 2007 workshop in cultural heritage on the semantic web.

    Abstract

    Thesaurus alignment is important for efficient access to heterogeneous Cultural Heritage data. Current ontology alignment techniques provide solutions, but with limited value in practice, because the requirements from usage scenarios are rarely taken into account. In this paper, we start from particular requirements for book re-indexing and investigate possible ways of developing, deploying and evaluating thesaurus alignment techniques in this context. We then compare different aspects of this scenario with others from a more general perspective.
  • Jadoul, Y., & Ravignani, A. (2023). Modelling the emergence of synchrony from decentralized rhythmic interactions in animal communication. Proceedings of the Royal Society B: Biological Sciences, 290(2003). doi:10.1098/rspb.2023.0876.

    Abstract

    To communicate, an animal's strategic timing of rhythmic signals is crucial. Evolutionary, game-theoretical, and dynamical systems models can shed light on the interaction between individuals and the associated costs and benefits of signalling at a specific time. Mathematical models that study rhythmic interactions from a strategic or evolutionary perspective are rare in animal communication research. But new inspiration may come from a recent game theory model of how group synchrony emerges from local interactions of oscillatory neurons. In the study, the authors analyse when the benefit of joint synchronization outweighs the cost of individual neurons sending electrical signals to each other. They postulate there is a benefit for pairs of neurons to fire together and a cost for a neuron to communicate. The resulting model delivers a variant of a classical dynamical system, the Kuramoto model. Here, we present an accessible overview of the Kuramoto model and evolutionary game theory, and of the 'oscillatory neurons' model. We interpret the model's results and discuss the advantages and limitations of using this particular model in the context of animal rhythmic communication. Finally, we sketch potential future directions and discuss the need to further combine evolutionary dynamics, game theory and rhythmic processes in animal communication studies.
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). PyGellermann: a Python tool to generate pseudorandom series for human and non-human animal behavioural experiments. BMC Research Notes, 16: 135. doi:10.1186/s13104-023-06396-x.

    Abstract

    Objective

    Researchers in animal cognition, psychophysics, and experimental psychology need to randomise the presentation order of trials in experimental sessions. In many paradigms, for each trial, one of two responses can be correct, and the trials need to be ordered such that the participant’s responses are a fair assessment of their performance. Specifically, in some cases, especially for low numbers of trials, randomised trial orders need to be excluded if they contain simple patterns which a participant could accidentally match and so succeed at the task without learning.
    Results

    We present and distribute a simple Python software package and tool to produce pseudorandom sequences following the Gellermann series. This series has been proposed to pre-empt simple heuristics and avoid inflated performance rates via false positive responses. Our tool allows users to choose the sequence length and outputs a .csv file with newly and randomly generated sequences. This allows behavioural researchers to produce, in a few seconds, a pseudorandom sequence for their specific experiment. PyGellermann is available at https://github.com/YannickJadoul/PyGellermann.
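The generation strategy described above can be illustrated as rejection sampling: shuffle a balanced sequence of two alternatives and keep it only if it satisfies the series criteria. The sketch below is a minimal stdlib-only illustration, not PyGellermann's actual API; the function names are hypothetical, and it checks only an assumed, simplified subset of Gellermann's (1933) criteria (balanced counts, no run longer than three, and near-chance agreement with strict alternation).

```python
import csv
import random

def is_gellermann_like(seq, max_run=3):
    """Check an assumed, simplified subset of Gellermann's (1933) criteria:
    balanced counts of 'A'/'B', no run longer than max_run, and
    near-chance overlap with strict single alternation."""
    n = len(seq)
    if seq.count('A') != n // 2 or seq.count('B') != n - n // 2:
        return False
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return False
    # A participant answering by strict alternation should score ~50%.
    for start in ('A', 'B'):
        other = 'B' if start == 'A' else 'A'
        alternation = [start if i % 2 == 0 else other for i in range(n)]
        hits = sum(a == b for a, b in zip(seq, alternation))
        if not 0.4 * n <= hits <= 0.6 * n:
            return False
    return True

def generate_sequence(n=10, rng=None):
    """Rejection-sample one balanced pseudorandom sequence of length n."""
    rng = rng or random.Random()
    while True:
        seq = list('A' * (n // 2) + 'B' * (n - n // 2))
        rng.shuffle(seq)
        if is_gellermann_like(seq):
            return ''.join(seq)

def write_sequences(path, count=5, n=10, seed=None):
    """Write `count` sequences to a CSV file, one per row."""
    rng = random.Random(seed)
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['sequence'])
        for _ in range(count):
            writer.writerow([generate_sequence(n, rng)])
```

Rejection sampling is practical here because, for short series, a large fraction of balanced shuffles already satisfies the constraints; the actual tool implements the full published criteria and a graphical interface.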
  • Jadoul, Y., Düngen, D., & Ravignani, A. (2023). Live-tracking acoustic parameters in animal behavioural experiments: Interactive bioacoustics with parselmouth. In A. Astolfi, F. Asdrubali, & L. Shtrepi (Eds.), Proceedings of the 10th Convention of the European Acoustics Association Forum Acusticum 2023 (pp. 4675-4678). Torino: European Acoustics Association.

    Abstract

    Most bioacoustics software is used to analyse already-collected acoustic data in batch, i.e., after the data-collecting phase of a scientific study. However, experiments based on animal training require immediate and precise reactions from the experimenter, and thus do not easily dovetail with a typical bioacoustics workflow. Bridging this methodological gap, we have developed a custom application to live-monitor the vocal development of harbour seals in a behavioural experiment. In each trial, the application records and automatically detects an animal's call, and immediately measures duration and acoustic measures such as intensity, fundamental frequency, or formant frequencies. It then displays a spectrogram of the recording and the acoustic measurements, allowing the experimenter to instantly evaluate whether or not to reinforce the animal's vocalisation. From a technical perspective, the rapid and easy development of this custom software was made possible by combining multiple open-source software projects. Here, we integrated the acoustic analyses from Parselmouth, a Python library for Praat, together with PyAudio and Matplotlib's recording and plotting functionality, into a custom graphical user interface created with PyQt. This flexible recombination of different open-source Python libraries allows the whole program to be written in a mere couple of hundred lines of code.
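The per-trial measurement step (record a call, then immediately report its duration and intensity) can be illustrated without Praat itself. The sketch below is a stdlib-only stand-in for the measurements the authors obtain from Parselmouth, computing duration and RMS intensity in dB relative to full scale for a synthetic tone; the function names and the dB reference are assumptions for illustration, not the application's actual code.

```python
import math

def rms_db(samples):
    """RMS intensity in dB relative to full scale (amplitude 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def measure_call(samples, sample_rate):
    """Return (duration_s, intensity_db) for one detected call."""
    return len(samples) / sample_rate, rms_db(samples)

# Stand-in for a recorded call: 0.5 s of a 440 Hz tone at half amplitude.
sr = 16000
tone = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 2)]
duration, level = measure_call(tone, sr)  # ~0.5 s at roughly -9 dBFS
```

In the real application these values come from Parselmouth's analysis objects (e.g., intensity, pitch, and formant tracks computed by Praat's algorithms), and the experimenter sees them alongside a spectrogram within each trial.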
  • Jago, L. S., Alcock, K., Meints, K., Pine, J. M., & Rowland, C. F. (2023). Language outcomes from the UK-CDI Project: Can risk factors, vocabulary skills and gesture scores in infancy predict later language disorders or concern for language development? Frontiers in Psychology, 14: 1167810. doi:10.3389/fpsyg.2023.1167810.

    Abstract

    At the group level, children exposed to certain health and demographic risk factors, and who have delayed language in early childhood, are more likely to have language problems later in childhood. However, it is unclear whether we can use these risk factors to predict whether an individual child is likely to develop problems with language (e.g., be diagnosed with a developmental language disorder). We tested this in a sample of 146 children who took part in the UK-CDI norming project. When the children were 15–18 months old, 1,210 British parents completed: (a) the UK-CDI (a detailed assessment of vocabulary and gesture use) and (b) the Family Questionnaire (questions about health and demographic risk factors). When the children were between 4 and 6 years, 146 of the same parents completed a short questionnaire that (a) assessed whether children had been diagnosed with a disability that was likely to affect language proficiency (e.g., developmental disability, language disorder, hearing impairment), and (b) yielded a broader measure: whether the child’s language had raised any concern, either by a parent or professional. Discriminant function analyses were used to assess whether we could use different combinations of 10 risk factors, together with early vocabulary and gesture scores, to identify children (a) who had developed a language-related disability by the age of 4–6 years (20 children, 13.70% of the sample) or (b) for whom concern about language had been expressed (49 children; 33.56%). The overall accuracy of the models and the specificity scores were high, indicating that the measures correctly identified those children without a language-related disability and whose language was not of concern. However, sensitivity scores were low, indicating that the models could not identify those children who were diagnosed with a language-related disability or whose language was of concern.
Several exploratory analyses were carried out to analyse these results further. Overall, the results suggest that it is difficult to use parent reports of early risk factors and language in the first 2 years of life to predict which children are likely to be diagnosed with a language-related disability. Possible reasons for this are discussed.

    Additional information

    Follow-up questionnaire (Table S1)
  • Janse, E., Nooteboom, S. G., & Quené, H. (2007). Coping with gradient forms of /t/-deletion and lexical ambiguity in spoken word recognition. Language and Cognitive Processes, 22(2), 161-200. doi:10.1080/01690960500371024.

    Abstract

    This study investigates how listeners cope with gradient forms of deletion of word-final /t/ when recognising words in a phonological context that makes /t/-deletion viable. A corpus study confirmed a high incidence of /t/-deletion in an /st#b/ context in Dutch. A discrimination study showed that differences between released /t/, unreleased /t/ and fully deleted /t/ in this specific /st#b/ context were salient. Two on-line experiments were carried out to investigate whether lexical activation might be affected by this form variation. Even though unreleased and released variants were processed equally fast by listeners, a detailed analysis of the unreleased condition provided evidence for gradient activation. Activating a target ending in /t/ is slowest for the most reduced variant because phonological context has to be taken into account. Importantly, activation for a target with /t/ in the absence of cues for /t/ is reduced if there is a surface-matching lexical competitor.
  • Janse, E., Van der Werff, M., & Quené, H. (2007). Listening to fast speech: Aging and sentence context. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 681-684). Dudweiler: Pirrot.

    Abstract

    In this study we investigated to what extent a meaningful sentence context facilitates spoken word processing in young and older listeners if listening is made taxing by time-compressing the speech. Even though elderly listeners have been shown to benefit more from sentence context in difficult listening conditions than young listeners, time compression of speech may interfere with semantic comprehension, particularly in older listeners because of cognitive slowing. The results of a target detection experiment showed that, unlike young listeners who showed facilitation by context at both rates, elderly listeners showed context facilitation at the intermediate, but not at the fastest rate. This suggests that semantic interpretation lags behind target identification.
  • Janse, E., Sennema, A., & Slis, A. (2000). Fast speech timing in Dutch: The durational correlates of lexical stress and pitch accent. In Proceedings of the VIth International Conference on Spoken Language Processing, Vol. III (pp. 251-254).

    Abstract

    In this study we investigated the durational correlates of lexical stress and pitch accent at normal and fast speech rate in Dutch. Previous literature on English shows that durations of lexically unstressed vowels are reduced more than stressed vowels when speakers increase their speech rate. We found that the same holds for Dutch, irrespective of whether the unstressed vowel is schwa or a "full" vowel. In the same line, we expected that vowels in words without a pitch accent would be shortened relatively more than vowels in words with a pitch accent. This was not the case: if anything, the accented vowels were shortened relatively more than the unaccented vowels. We conclude that duration is an important cue for lexical stress, but not for pitch accent.
  • Janse, E. (2000). Intelligibility of time-compressed speech: Three ways of time-compression. In Proceedings of the VIth International Conference on Spoken Language Processing, vol. III (pp. 786-789).

    Abstract

    Studies on fast speech have shown that word-level timing of fast speech differs from that of normal rate speech in that unstressed syllables are shortened more than stressed syllables as speech rate increases. An earlier experiment showed that the intelligibility of time-compressed speech could not be improved by making its temporal organisation closer to natural fast speech. To test the hypothesis that segmental intelligibility is more important than prosodic timing in listening to time-compressed speech, the intelligibility of bisyllabic words was tested in three time-compression conditions: either stressed and unstressed syllable were compressed to the same degree, or the stressed syllable was compressed more than the unstressed syllable, or the reverse. As was found before, imitating word-level timing of fast speech did not improve intelligibility over linear compression. However, the results did not confirm the hypothesis either: there was no difference in intelligibility between the three compression conditions. We conclude that segmental intelligibility plays an important role, but further research is necessary to decide between the contributions of prosody and segmental intelligibility to the word-level intelligibility of time-compressed speech.
  • Janzen, G., Wagensveld, B., & Van Turennout, M. (2007). Neural representation of navigational relevance is rapidly induced and long lasting. Cerebral Cortex, 17(4), 975-981. doi:10.1093/cercor/bhl008.

    Abstract

    Successful navigation is facilitated by the presence of landmarks. Previous functional magnetic resonance imaging (fMRI) evidence indicated that the human parahippocampal gyrus automatically distinguishes between landmarks placed at navigationally relevant (decision points) and irrelevant locations (nondecision points). This storage of navigational relevance can provide a neural mechanism underlying successful navigation. However, an efficient wayfinding mechanism requires that important spatial information is learned quickly and maintained over time. The present study investigates whether the representation of navigational relevance is modulated by time and practice. Participants learned 2 film sequences through virtual mazes containing objects at decision and at nondecision points. One maze was shown one time, and the other maze was shown 3 times. Twenty-four hours after study, event-related fMRI data were acquired during recognition of the objects. The results showed that activity in the parahippocampal gyrus was increased for objects previously placed at decision points as compared with objects placed at nondecision points. The decision point effect was not modulated by the number of exposures to the mazes and independent of explicit memory functions. These findings suggest a persistent representation of navigationally relevant information, which is stable after only one exposure to an environment. These rapidly induced and long-lasting changes in object representation provide a basis for successful wayfinding.
  • Janzen, G., & Weststeijn, C. G. (2007). Neural representation of object location and route direction: An event-related fMRI study. Brain Research, 1165, 116-125. doi:10.1016/j.brainres.2007.05.074.

    Abstract

    The human brain distinguishes between landmarks placed at navigationally relevant and irrelevant locations. However, to provide a successful wayfinding mechanism not only landmarks but also the routes between them need to be stored. We examined the neural representation of a memory for route direction and a memory for relevant landmarks. Healthy human adults viewed objects along a route through a virtual maze. Event-related functional magnetic resonance imaging (fMRI) data were acquired during a subsequent subliminal priming recognition task. Prime-objects either preceded or succeeded a target-object on a previously learned route. Our results provide evidence that the parahippocampal gyri distinguish between relevant and irrelevant landmarks whereas the inferior parietal gyrus, the anterior cingulate gyrus as well as the right caudate nucleus are involved in the coding of route direction. These data show that separate memory systems store different spatial information. A memory for navigationally relevant object information and a memory for route direction exist.
  • Janzen, G., Herrmann, T., Katz, S., & Schweizer, K. (2000). Oblique Angled Intersections and Barriers: Navigating through a Virtual Maze. In Spatial Cognition II (pp. 277-294). Berlin: Springer.

    Abstract

    The configuration of a spatial layout has a substantial effect on the acquisition and the representation of the environment. In four experiments, we investigated navigation difficulties arising at oblique angled intersections. In the first three studies we investigated specific arrow-fork configurations. Depending on the branch subjects use to enter the intersection, different decision latencies and numbers of errors arise. If subjects see the intersection as a fork, it is more difficult to find the correct way than if it is seen as an arrow. In a fourth study we investigated different heuristics people use while making a detour around a barrier. Detour behaviour varies with perspective. If subjects learn and navigate through the maze in a field perspective, they use a heuristic of preferring right angled paths. If they have a view from above and acquire their knowledge in an observer perspective, they use oblique angled paths more often.

  • Jesse, A., & McQueen, J. M. (2007). Prelexical adjustments to speaker idiosyncracies: Are they position-specific? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1597-1600). Adelaide: Causal Productions.

    Abstract

    Listeners use lexical knowledge to adjust their prelexical representations of speech sounds in response to the idiosyncratic pronunciations of particular speakers. We used an exposure-test paradigm to investigate whether this type of perceptual learning transfers across syllabic positions. No significant learning effect was found in Experiment 1, where exposure sounds were onsets and test sounds were codas. Experiments 2-4 showed that there was no learning even when both exposure and test sounds were onsets. But a trend was found when exposure sounds were codas and test sounds were onsets (Experiment 5). This trend was smaller than the robust effect previously found for the coda-to-coda case. These findings suggest that knowledge about idiosyncratic pronunciations may be position specific: Knowledge about how a speaker produces sounds in one position, if it can be acquired at all, influences perception of sounds in that position more strongly than of sounds in another position.
  • Jesse, A., McQueen, J. M., & Page, M. (2007). The locus of talker-specific effects in spoken-word recognition. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1921-1924). Dudweiler: Pirrot.

    Abstract

    Words repeated in the same voice are better recognized than when they are repeated in a different voice. Such findings have been taken as evidence for the storage of talker-specific lexical episodes. But results on perceptual learning suggest that talker-specific adjustments concern sublexical representations. This study thus investigates whether voice-specific repetition effects in auditory lexical decision are lexical or sublexical. The same critical set of items in Block 2 were, depending on materials in Block 1, either same-voice or different-voice word repetitions, new words comprising re-orderings of phonemes used in the same voice in Block 1, or new words with previously unused phonemes. Results show a benefit for words repeated by the same talker, and a smaller benefit for words consisting of phonemes repeated by the same talker. Talker-specific information thus appears to influence word recognition at multiple representational levels.
  • Jesse, A., & McQueen, J. M. (2007). Visual lexical stress information in audiovisual spoken-word recognition. In J. Vroomen, M. Swerts, & E. Krahmer (Eds.), Proceedings of the International Conference on Auditory-Visual Speech Processing 2007 (pp. 162-166). Tilburg: University of Tilburg.

    Abstract

    Listeners use suprasegmental auditory lexical stress information to resolve the competition words engage in during spoken-word recognition. The present study investigated whether (a) visual speech provides lexical stress information, and, more importantly, (b) whether this visual lexical stress information is used to resolve lexical competition. Dutch word pairs that differ in the lexical stress realization of their first two syllables, but not segmentally (e.g., 'OCtopus' and 'okTOber'; capitals marking primary stress) served as auditory-only, visual-only, and audiovisual speech primes. These primes either matched (e.g., 'OCto-'), mismatched (e.g., 'okTO-'), or were unrelated to (e.g., 'maCHI-') a subsequent printed target (octopus), which participants had to make a lexical decision to. To the degree that visual speech contains lexical stress information, lexical decisions to printed targets should be modulated through the addition of visual speech. Results show, however, no evidence for a role of visual lexical stress information in audiovisual spoken-word recognition.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated whether an effect of visible speech can be found in other contexts, where visual information could provide cues for emotions, prosody, or syntax.
  • Jin, H., Wang, Q., Yang, Y.-F., Zhang, H., Gao, M., Jin, S., Chen, Y., Xu, T., Zheng, Y.-R., Chen, J., Xiao, Q., Yang, J., Wang, X., Geng, H., Ge, J., Wang, W.-W., Chen, X., Zhang, L., Zuo, X.-N., & Chuan-Peng, H. (2023). The Chinese Open Science Network (COSN): Building an open science community from scratch. Advances in Methods and Practices in Psychological Science, 6(1): 10.1177/25152459221144986. doi:10.1177/25152459221144986.

    Abstract

    Open Science is becoming a mainstream scientific ideology in psychology and related fields. However, researchers, especially early-career researchers (ECRs) in developing countries, are facing significant hurdles in engaging in Open Science and moving it forward. In China, various societal and cultural factors discourage ECRs from participating in Open Science, such as the lack of dedicated communication channels and the norm of modesty. To make the voice of Open Science heard by Chinese-speaking ECRs and scholars at large, the Chinese Open Science Network (COSN) was initiated in 2016. With grassroots orientation, diversity, and inclusivity as its core values, COSN has grown from a small Open Science interest group to a recognized network both in the Chinese-speaking research community and the international Open Science community. So far, COSN has organized three in-person workshops, 12 tutorials, 48 talks, and 55 journal club sessions and translated 15 Open Science-related articles and blogs from English to Chinese. Currently, the main social media account of COSN (i.e., the WeChat Official Account) has more than 23,000 subscribers, and more than 1,000 researchers/students actively participate in the discussions on Open Science. In this article, we share our experience in building such a network to encourage ECRs in developing countries to start their own Open Science initiatives and engage in the global Open Science movement. We foresee great collaborative efforts of COSN together with all other local and international networks to further accelerate the Open Science movement.
  • Jodzio, A., Piai, V., Verhagen, L., Cameron, I., & Indefrey, P. (2023). Validity of chronometric TMS for probing the time-course of word production: A modified replication. Cerebral Cortex, 33(12), 7816-7829. doi:10.1093/cercor/bhad081.

    Abstract

    In the present study, we used chronometric TMS to probe the time-course of 3 brain regions during a picture naming task. The left inferior frontal gyrus, left posterior middle temporal gyrus, and left posterior superior temporal gyrus were all separately stimulated in 1 of 5 time-windows (225, 300, 375, 450, and 525 ms) from picture onset. We found posterior temporal areas to be causally involved in picture naming in earlier time-windows, whereas all 3 regions appear to be involved in the later time-windows. However, chronometric TMS produces nonspecific effects that may impact behavior, and furthermore, the time-course of any given process is a product of both the involved processing stages along with individual variation in the duration of each stage. We therefore extend previous work in the field by accounting for both individual variations in naming latencies and directly testing for nonspecific effects of TMS. Our findings reveal that both factors influence behavioral outcomes at the group level, underlining the importance of accounting for individual variations in naming latencies, especially for late processing stages closer to articulation, and recognizing the presence of nonspecific effects of TMS. The paper advances key considerations and avenues for future work using chronometric TMS to study overt production.
  • Joergens, S., Kleiser, R., & Indefrey, P. (2007). Handedness and fMRI-activation patterns in sentence processing. NeuroReport, 18(13), 1339-1343.

    Abstract

    We investigated differences in cerebral activation between 12 right-handed and 12 left-handed participants using a sentence-processing task. Functional MRI shows activation of left-frontal and inferior-parietal speech areas (BA 44, BA 9, BA 40) in both groups, but a stronger bilateral activation in left-handers. Direct group comparison reveals a stronger activation in right-frontal cortex (BA 47, BA 6) and left cerebellum in left-handers. Laterality indices for the inferior-frontal cortex are less asymmetric in left-handers and are not related to the degree of handedness. Thus, our results show that sentence processing induced enhanced activation of a bilateral network in left-handed participants.
  • Johns, T. G., Perera, R. M., Vernes, S. C., Vitali, A. A., Cao, D. X., Cavenee, W. K., Scott, A. M., & Furnari, F. B. (2007). The efficacy of epidermal growth factor receptor-specific antibodies against glioma xenografts is influenced by receptor levels, activation status, and heterodimerization. Clinical Cancer Research, 13, 1911-1925. doi:10.1158/1078-0432.CCR-06-1453.

    Abstract

    Purpose: Factors affecting the efficacy of therapeutic monoclonal antibodies (mAb) directed to the epidermal growth factor receptor (EGFR) remain relatively unknown, especially in glioma. Experimental Design: We examined the efficacy of two EGFR-specific mAbs (mAbs 806 and 528) against U87MG-derived glioma xenografts expressing EGFR variants. Using this approach allowed us to change the form of the EGFR while keeping the genetic background constant. These variants included the de2-7 EGFR (or EGFRvIII), a constitutively active mutation of the EGFR expressed in glioma. Results: The efficacy of the mAbs correlated with EGFR number; however, the most important factor was receptor activation. Whereas U87MG xenografts expressing the de2-7 EGFR responded to therapy, those exhibiting a dead kinase de2-7 EGFR were refractory. A modified de2-7 EGFR that was kinase active but autophosphorylation deficient also responded, suggesting that these mAbs function in de2-7 EGFR–expressing xenografts by blocking transphosphorylation. Because de2-7 EGFR–expressing U87MG xenografts coexpress the wild-type EGFR, efficacy of the mAbs was also tested against NR6 xenografts that expressed the de2-7 EGFR in isolation. Whereas mAb 806 displayed antitumor activity against NR6 xenografts, mAb 528 therapy was ineffective, suggesting that mAb 528 mediates its antitumor activity by disrupting interactions between the de2-7 and wild-type EGFR. Finally, genetic disruption of Src in U87MG xenografts expressing the de2-7 EGFR dramatically enhanced mAb 806 efficacy. Conclusions: The effective use of EGFR-specific antibodies in glioma will depend on identifying tumors with activated EGFR. The combination of EGFR and Src inhibitors may be an effective strategy for the treatment of glioma.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2000). The development of word recognition: The use of the possible-word constraint by 12-month-olds. In L. Gleitman, & A. Joshi (Eds.), Proceedings of CogSci 2000 (pp. 1034). London: Erlbaum.
  • Jordan, F. (2007). A comparative phylogenetic approach to Austronesian cultural evolution. PhD Thesis, University College London, London.
  • Jordan, F. (2007). Engaging in chit-chat (and all that). [Review of the book Why we talk: The evolutionary origins of language by Jean-Louis Dessalles]. Journal of Evolutionary Psychology, 5(1-4), 241-244. doi:10.1556/JEP.2007.1014.
  • Jordanoska, I. (2023). Focus marking and size in some Mande and Atlantic languages. In N. Sumbatova, I. Kapitonov, M. Khachaturyan, S. Oskolskaya, & V. Verhees (Eds.), Songs and Trees: Papers in Memory of Sasha Vydrina (pp. 311-343). St. Petersburg: Institute for Linguistic Studies and Russian Academy of Sciences.

    Abstract

    This paper compares the focus marking systems and the focus size that can be expressed by the different focus markings in four Mande and three Atlantic languages and varieties, namely: Bambara, Dyula, Kakabe, Soninke (Mande), Wolof, Jóola Foñy and Jóola Karon (Atlantic). All of these languages are known to mark focus morphosyntactically, rather than prosodically, as the more well-studied Germanic languages do. However, the Mande languages under discussion use only morphology, in the form of a particle that follows the focus, while the Atlantic ones use a more complex morphosyntactic system in which focus is marked by morphology in the verbal complex and movement of the focused term. It is shown that while there are some syntactic restrictions to how many different focus sizes can be marked in a distinct way, there is also a certain degree of arbitrariness as to which focus sizes are marked in the same way as each other.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (2023). Introduction special issue: Marking the truth: A cross-linguistic approach to verum. Zeitschrift für Sprachwissenschaft, 42(3), 429-442. doi:10.1515/zfs-2023-2012.

    Abstract

    This special issue focuses on the theoretical and empirical underpinnings of truth-marking. The names that have been used to refer to this phenomenon include, among others, counter-assertive focus, polar(ity) focus, verum focus, emphatic polarity or simply verum. This terminological variety is suggestive of the wide range of ideas and conceptions that characterizes this research field. This collection aims to get closer to the core of what truly constitutes verum. We want to expand the empirical base and determine the common and diverging properties of truth-marking in the languages of the world. The objective is to set a theoretical and empirical baseline for future research on verum and related phenomena.
  • Jordanoska, I., Kocher, A., & Bendezú-Araujo, R. (Eds.). (2023). Marking the truth: A cross-linguistic approach to verum [Special Issue]. Zeitschrift für Sprachwissenschaft, 42(3).
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederlänidischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Kałamała, P., Chuderski, A., Szewczyk, J., Senderecka, M., & Wodniecka, Z. (2023). Bilingualism caught in a net: A new approach to understanding the complexity of bilingual experience. Journal of Experimental Psychology: General, 152(1), 157-174. doi:10.1037/xge0001263.

    Abstract

    The growing importance of research on bilingualism in psychology and neuroscience motivates the need for a psychometric model that can be used to understand and quantify this phenomenon. This research is the first to meet this need. We reanalyzed two data sets (N = 171 and N = 112) from relatively young adult language-unbalanced bilinguals and asked whether bilingualism is best described by the factor structure or by the network structure. The factor and network models were established on one data set and then validated on the other data set in a fully confirmatory manner. The network model provided the best fit to the data. This implies that bilingualism should be conceptualized as an emergent phenomenon arising from direct and idiosyncratic dependencies among the history of language acquisition, diverse language skills, and language-use practices. These dependencies can be reduced to neither a single universal quotient nor to some more general factors. Additional in-depth network analyses showed that the subjective perception of proficiency along with language entropy and language mixing were the most central indices of bilingualism, thus indicating that these measures can be especially sensitive to variation in the overall bilingual experience. Overall, this work highlights the great potential of psychometric network modeling to gain a more accurate description and understanding of complex (psycho)linguistic and cognitive phenomena.
  • Kanakanti, M., Singh, S., & Shrivastava, M. (2023). MultiFacet: A multi-tasking framework for speech-to-sign language generation. In E. André, M. Chetouani, D. Vaufreydaz, G. Lucas, T. Schultz, L.-P. Morency, & A. Vinciarelli (Eds.), ICMI '23 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction (pp. 205-213). New York: ACM. doi:10.1145/3610661.3616550.

    Abstract

    Sign language is a rich form of communication, uniquely conveying meaning through a combination of gestures, facial expressions, and body movements. Existing research in sign language generation has predominantly focused on text-to-sign pose generation, while speech-to-sign pose generation remains relatively underexplored. Speech-to-sign language generation models can facilitate effective communication between the deaf and hearing communities. In this paper, we propose an architecture that utilises prosodic information from speech audio and semantic context from text to generate sign pose sequences. In our approach, we adopt a multi-tasking strategy that involves an additional task of predicting Facial Action Units (FAUs). FAUs capture the intricate facial muscle movements that play a crucial role in conveying specific facial expressions during sign language generation. We train our models on an existing Indian Sign language dataset that contains sign language videos with audio and text translations. To evaluate our models, we report Dynamic Time Warping (DTW) and Probability of Correct Keypoints (PCK) scores. We find that combining prosody and text as input, along with incorporating facial action unit prediction as an additional task, outperforms previous models in both DTW and PCK scores. We also discuss the challenges and limitations of speech-to-sign pose generation models to encourage future research in this domain. We release our models, results and code to foster reproducibility and encourage future research.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. However, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Kaspi, A., Hildebrand, M. S., Jackson, V. E., Braden, R., Van Reyk, O., Howell, T., Debono, S., Lauretta, M., Morison, L., Coleman, M. J., Webster, R., Coman, D., Goel, H., Wallis, M., Dabscheck, G., Downie, L., Baker, E. K., Parry-Fielder, B., Ballard, K., Harrold, E., Ziegenfusz, S., Bennett, M. F., Robertson, E., Wang, L., Boys, A., Fisher, S. E., Amor, D. J., Scheffer, I. E., Bahlo, M., & Morgan, A. T. (2023). Genetic aetiologies for childhood speech disorder: Novel pathways co-expressed during brain development. Molecular Psychiatry, 28, 1647-1663. doi:10.1038/s41380-022-01764-8.

    Abstract

    Childhood apraxia of speech (CAS), the prototypic severe childhood speech disorder, is characterized by motor programming and planning deficits. Genetic factors make substantive contributions to CAS aetiology, with a monogenic pathogenic variant identified in a third of cases, implicating around 20 single genes to date. Here we aimed to identify molecular causation in 70 unrelated probands ascertained with CAS. We performed trio genome sequencing. Our bioinformatic analysis examined single nucleotide, indel, copy number, structural and short tandem repeat variants. We prioritised appropriate variants arising de novo or inherited that were expected to be damaging based on in silico predictions. We identified high confidence variants in 18/70 (26%) probands, almost doubling the current number of candidate genes for CAS. Three of the 18 variants affected SETBP1, SETD1A and DDX3X, thus confirming their roles in CAS, while the remaining 15 occurred in genes not previously associated with this disorder. Fifteen variants arose de novo and three were inherited. We provide further novel insights into the biology of child speech disorder, highlighting the roles of chromatin organization and gene regulation in CAS, and confirm that genes involved in CAS are co-expressed during brain development. Our findings confirm a diagnostic yield comparable to, or even higher, than other neurodevelopmental disorders with substantial de novo variant burden. Data also support the increasingly recognised overlaps between genes conferring risk for a range of neurodevelopmental disorders. Understanding the aetiological basis of CAS is critical to end the diagnostic odyssey and ensure affected individuals are poised for precision medicine trials.
  • Kelly, S. D., & Ozyurek, A. (Eds.). (2007). Gesture, language, and brain [Special Issue]. Brain and Language, 101(3).
  • Kempen, G., Anbeek, G., Desain, P., Konst, L., & De Smedt, K. (1987). Auteursomgevingen: Vijfde-generatie tekstverwerkers. Informatie, 29, 988-993.
  • Kempen, G., Anbeek, G., Desain, P., Konst, L., & De Smedt, K. (1987). Author environments: Fifth generation text processors. In Commission of the European Communities. Directorate-General for Telecommunications, Information Industries, and Innovation (Ed.), Esprit'86: Results and achievements (pp. 365-372). Amsterdam: Elsevier Science Publishers.
  • Kempen, G. (1991). Conjunction reduction and gapping in clause-level coordination: An inheritance-based approach. Computational Intelligence, 7, 357-360. doi:10.1111/j.1467-8640.1991.tb00406.x.
  • Kempen, G. (1976). De taalgebruiker in de mens: Een uitzicht over de taalpsychologie. Groningen: H.D. Tjeenk Willink.
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
  • Kempen, G., & Hoenkamp, E. (1987). An incremental procedural grammar for sentence formulation. Cognitive Science, 11(2), 201-258.

    Abstract

    This paper presents a theory of the syntactic aspects of human sentence production. An important characteristic of unprepared speech is that overt pronunciation of a sentence can be initiated before the speaker has completely worked out the meaning content he or she is going to express in that sentence. Apparently, the speaker is able to build up a syntactically coherent utterance out of a series of syntactic fragments each rendering a new part of the meaning content. This incremental, left-to-right mode of sentence production is the central capability of the proposed Incremental Procedural Grammar (IPG). Certain other properties of spontaneous speech, as derivable from speech errors, hesitations, self-repairs, and language pathology, are accounted for as well. The psychological plausibility thus gained by the grammar appears compatible with a satisfactory level of linguistic plausibility in that sentences receive structural descriptions which are in line with current theories of grammar. More importantly, an explanation for the existence of configurational conditions on transformations and other linguistic rules is proposed. The basic design feature of IPG which gives rise to these psychologically and linguistically desirable properties is the "Procedures + Stack" concept. Sentences are built not by a central constructing agency which overlooks the whole process but by a team of syntactic procedures (modules) which work, in parallel, on small parts of the sentence, have only a limited overview, and whose sole communication channel is a stack. IPG covers object complement constructions, interrogatives, and word order in main and subordinate clauses. It handles unbounded dependencies, cross-serial dependencies and coordination phenomena such as gapping and conjunction reduction. It is also capable of generating self-repairs and elliptical answers to questions. IPG has been implemented as an incremental Dutch sentence generator written in LISP.
  • Kempen, G. (1971). [Review of the book General Psychology by N. Dember and J.J. Jenkins]. Nijmeegs Tijdschrift voor Psychologie, 19, 132-133.
  • Kempen, G., & Harbusch, K. (1998). A 'tree adjoining' grammar without adjoining: The case of scrambling in German. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G. (2007). De kunst van het weglaten: Elliptische nevenschikking in een model van de spreker. In F. Moerdijk, A. van Santen, & R. Tempelaars (Eds.), Leven met woorden: Afscheidsbundel voor Piet van Sterkenburg (pp. 397-407). Leiden: Brill.

    Abstract

    This paper is an abridged version (in Dutch) of an in-press article by the same author (Kempen, G. (2008/9). Clausal coordination and coordinate ellipsis in a model of the speaker. To be published in: Linguistics). The two papers present a psycholinguistically inspired approach to the syntax of clause-level coordination and coordinate ellipsis. It departs from the assumption that coordinations are structurally similar to so-called appropriateness repairs, an important type of self-repair in spontaneous speech. Coordinate structures and appropriateness repairs can both be viewed as "update" constructions. Updating is defined as a special sentence production mode that efficiently revises or augments existing sentential structure in response to modifications in the speaker's communicative intention. This perspective is shown to offer an empirically satisfactory and theoretically parsimonious account of two prominent types of coordinate ellipsis, in particular Forward Conjunction Reduction (FCR) and Gapping (including Long-Distance Gapping and Subgapping). They are analyzed as different manifestations of "incremental updating": efficient updating of only part of the existing sentential structure. Based on empirical data from Dutch and German, novel treatments are proposed for both types of clausal coordinate ellipsis. Two other forms of coordinate ellipsis, SGF ("Subject Gap in Finite clauses with fronted verb") and Backward Conjunction Reduction (BCR; also known as Right Node Raising or RNR), are shown to be incompatible with the notion of incremental updating. Alternative theoretical interpretations of these phenomena are proposed. The four types of clausal coordinate ellipsis (SGF, Gapping, FCR, and BCR) are argued to originate in four different stages of sentence production: Intending (i.e. preparing the communicative intention), Conceptualization, Grammatical Encoding, and Phonological Encoding, respectively.
  • Kempen, G., & De Vroomen, P. (Eds.). (1991). Informatiewetenschap 1991: Wetenschappelijke bijdragen aan de eerste STINFON-conferentie. Leiden: STINFON.
  • Kempen, G. (1971). Het onthouden van eenvoudige zinnen met zijn en hebben als werkwoorden: Een experiment met steekwoordreaktietijden. Nijmeegs Tijdschrift voor Psychologie, 19, 262-274.
  • Kempen, G. (Ed.). (1987). Natural language generation: New results in artificial intelligence, psychology and linguistics. Dordrecht: Nijhoff.
  • Kempen, G. (Ed.). (1987). Natuurlijke taal en kunstmatige intelligentie: Taal tussen mens en machine. Groningen: Wolters-Noordhoff.
  • Kempen, G. (1971). Opslag van woordbetekenissen in het semantisch geheugen. Nijmeegs Tijdschrift voor Psychologie, 19, 36-50.
  • Kempen, G. (1987). Tekstverwerking: De vijfde generatie. Informatie, 29, 402-406.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kempen, G. (1976). Syntactic constructions as retrieval plans. British Journal of Psychology, 67(2), 149-160. doi:10.1111/j.2044-8295.1976.tb01505.x.

    Abstract

    Four probe latency experiments show that the ‘constituent boundary effect’ (transitions between constituents are more difficult than within constituents) is a retrieval and not a storage phenomenon. The experimental logic used is called paraphrastic reproduction: after verbatim memorization of some sentences, subjects were instructed to reproduce them both in their original wording and in the form of sentences that, whilst preserving the original meaning, embodied different syntactic constructions. Syntactic constructions are defined as pairs which consist of a pattern of conceptual information and a syntactic scheme, i.e. a sequence of syntactic word categories and function words. For example, the sequence noun + finite intransitive main verb (‘John runs’) expresses a conceptual actor-action relationship. It is proposed that for each overlearned and simple syntactic construction there exists a retrieval plan which does the following. It searches through the long-term memory information that has been designated as the conceptual content of the utterance(s) to be produced, looking for a token of its conceptual pattern. The retrieved information is then cast into the format of its syntactic scheme. The organization of such plans is held responsible for the constituent boundary effect.
  • Kendrick, K. H., Holler, J., & Levinson, S. C. (2023). Turn-taking in human face-to-face interaction is multimodal: Gaze direction and manual gestures aid the coordination of turn transitions. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210473. doi:10.1098/rstb.2021.0473.

    Abstract

    Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature.

    Additional information

    supplemental material
  • Kennaway, J., Glauert, J., & Zwitserlood, I. (2007). Providing Signed Content on the Internet by Synthesized Animation. ACM Transactions on Computer-Human Interaction (TOCHI), 14(3), 15. doi:10.1145/1279700.1279705.

    Abstract

    Written information is often of limited accessibility to deaf people who use sign language. The eSign project was undertaken as a response to the need for technologies enabling efficient production and distribution over the Internet of sign language content. By using an avatar-independent scripting notation for signing gestures and a client-side web browser plug-in to translate this notation into motion data for an avatar, we achieve highly efficient delivery of signing, while avoiding the inflexibility of video or motion capture. Tests with members of the deaf community have indicated that the method can provide an acceptable quality of signing.
  • Kerkhofs, R., Vonk, W., Schriefers, H., & Chwilla, D. J. (2007). Discourse, syntax, and prosody: The brain reveals an immediate interaction. Journal of Cognitive Neuroscience, 19(9), 1421-1434. doi:10.1162/jocn.2007.19.9.1421.

    Abstract

    Speech is structured into parts by syntactic and prosodic breaks. In locally syntactic ambiguous sentences, the detection of a syntactic break necessarily follows detection of a corresponding prosodic break, making an investigation of the immediate interplay of syntactic and prosodic information impossible when studying sentences in isolation. This problem can be solved, however, by embedding sentences in a discourse context that induces the expectation of either the presence or the absence of a syntactic break right at a prosodic break. Event-related potentials (ERPs) were compared to acoustically identical sentences in these different contexts. We found in two experiments that the closure positive shift, an ERP component known to be elicited by prosodic breaks, was reduced in size when a prosodic break was aligned with a syntactic break. These results establish that the brain matches prosodic information against syntactic information immediately.
  • Khemlani, S., Leslie, S.-J., Glucksberg, S., & Rubio-Fernández, P. (2007). Do ducks lay eggs? How people interpret generic assertions. In D. S. McNamara, & J. G. Trafton (Eds.), Proceedings of the 29th Annual Conference of the Cognitive Science Society (CogSci 2007). Austin, TX: Cognitive Science Society.
  • Kholodova, A., Peter, M., Rowland, C. F., Jacob, G., & Allen, S. E. M. (2023). Abstract priming and the lexical boost effect across development in a structurally biased language. Languages, 8: 264. doi:10.3390/languages8040264.

    Abstract

    The present study investigates the developmental trajectory of abstract representations for syntactic structures in children. In a structural priming experiment on the dative alternation in German, we primed children from three different age groups (3–4 years, 5–6 years, 7–8 years) and adults with double object datives (Dora sent Boots the rabbit) or prepositional object datives (Dora sent the rabbit to Boots). Importantly, the prepositional object structure in German is dispreferred and only rarely encountered by young children. While immediate as well as cumulative structural priming effects occurred across all age groups, these effects were strongest in the 3- to 4-year-old group and gradually decreased with increasing age. These results suggest that representations in young children are less stable than in adults and, therefore, more susceptible to adaptation both immediately and across time, presumably due to stronger surprisal. Lexical boost effects, in contrast, were not present in 3- to 4-year-olds but gradually emerged with increasing age, possibly due to limited working-memory capacity in the younger child groups.
  • Kidd, E., Arciuli, J., Christiansen, M. H., & Smithson, M. (2023). The sources and consequences of individual differences in statistical learning for language development. Cognitive Development, 66: 101335. doi:10.1016/j.cogdev.2023.101335.

    Abstract

    Statistical learning (SL), sensitivity to statistical regularities in the environment, has been postulated to support language development. While even young infants are capable of using distributional statistics to learn in linguistic and non-linguistic domains, efforts to measure SL at the level of the individual and link it to language proficiency in individual differences designs have been mixed, which has at least in part been attributed to problems with task reliability. In the current study we present the first prospective longitudinal study of the relationship between both non-linguistic SL (measured with visual stimuli) and linguistic SL (measured with auditory stimuli) and language in a group of English-speaking children. One hundred and twenty-one (N = 121) children in their first two years of formal schooling (mean age: 6;1 years, range: 5;2–7;2) completed tests of visual SL (VSL) and auditory SL (ASL) and several control variables at time 1. Both forms of SL were then measured every 6 months for the next 18 months, and at the final testing session (time 4) their language proficiency was measured using a standardised test. The results showed that the reliability of the SL tasks increased across the course of the study. A series of path analyses showed that both VSL and ASL independently predicted individual differences in language proficiency at time 4. The evidence is consistent with the suggestion that, when measured reliably, an observable relationship between SL and language proficiency exists. Theoretical and methodological issues are discussed.

    Additional information

    data and code
  • Kidd, E., & Bavin, E. L. (2007). Lexical and referential influences on on-line spoken language comprehension: A comparison of adults and primary-school-age children. First Language, 27(1), 29-52. doi:10.1177/0142723707067437.

    Abstract

    This paper reports on two studies investigating children's and adults' processing of sentences containing ambiguity of prepositional phrase (PP) attachment. Study 1 used corpus data to investigate whether cues argued to be used by adults to resolve PP-attachment ambiguities are available in child-directed speech. Study 2 was an on-line reaction time study investigating the role of lexical and referential biases in syntactic ambiguity resolution by children and adults. Forty children (mean age 8;4) and 37 adults listened to V-NP-PP sentences containing temporary ambiguity of PP-attachment. The sentences were manipulated for (i) verb semantics, (ii) the definiteness of the object NP, and (iii) PP-attachment site. The children and adults did not differ qualitatively from each other in their resolution of the ambiguity. A verb semantics by attachment interaction suggested that different attachment analyses were pursued depending on the semantics of the verb. There was no influence of the definiteness of the object NP in either children's or adults' parsing preferences. The findings from the on-line task matched up well with the corpus data, thus identifying a role for the input in the development of parsing strategies.
  • Kidd, E., Brandt, S., Lieven, E., & Tomasello, M. (2007). Object relatives made easy: A cross-linguistic comparison of the constraints influencing young children's processing of relative clauses. Language and Cognitive Processes, 22(6), 860-897. doi:10.1080/01690960601155284.

    Abstract

    We present the results from four studies, two corpora and two experimental, which suggest that English- and German-speaking children (3;1–4;9 years) use multiple constraints to process and produce object relative clauses. Our two corpora studies show that children produce object relatives that reflect the distributional and discourse regularities of the input. Specifically, the results show that when children produce object relatives they most often do so with (a) an inanimate head noun, and (b) a pronominal relative clause subject. Our experimental findings show that children use these constraints to process and produce this construction type. Moreover, when children were required to repeat the object relatives they most often use in naturalistic speech, the subject-object asymmetry in processing of relative clauses disappeared. We also report cross-linguistic differences in children's rate of acquisition which reflect properties of the input language. Overall, our results suggest that children are sensitive to the same constraints on relative clause processing as adults.
  • Kita, S., Ozyurek, A., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2007). Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. Language and Cognitive Processes, 22(8), 1212-1236. doi:10.1080/01690960701461426.

    Abstract

    Gestures that accompany speech are known to be tightly coupled with speech production. However, little is known about the cognitive processes that underlie this link. Previous cross-linguistic research has provided preliminary evidence for online interaction between the two systems based on the systematic co-variation found between how different languages syntactically package Manner and Path information of a motion event and how gestures represent Manner and Path. Here we elaborate on this finding by testing whether speakers within the same language gesturally express Manner and Path differently according to their online choice of syntactic packaging of Manner and Path, or whether gestural expression is pre-determined by a habitual conceptual schema congruent with the linguistic typology. Typologically congruent and incongruent syntactic structures for expressing Manner and Path (i.e., in a single clause or multiple clauses) were elicited from English speakers. We found that gestural expressions were determined by the online choice of syntactic packaging rather than by a habitual conceptual schema. It is therefore concluded that speech and gesture production processes interface online at the conceptual planning phase. Implications of the findings for models of speech and gesture production are discussed.
  • Kita, S., & Ozyurek, A. (2007). How does spoken language shape iconic gestures? In S. Duncan, J. Cassel, & E. Levy (Eds.), Gesture and the dynamic dimension of language (pp. 67-74). Amsterdam: Benjamins.
