Publications

  • Smalley, S. L., Kustanovich, V., Minassian, S. L., Stone, J. L., Ogdie, M. N., McGough, J. J., McCracken, J. T., MacPhie, I. L., Francks, C., Fisher, S. E., Cantor, R. M., Monaco, A. P., & Nelson, S. F. (2002). Genetic linkage of Attention-Deficit/Hyperactivity Disorder on chromosome 16p13, in a region implicated in autism. American Journal of Human Genetics, 71(4), 959-963. doi:10.1086/342732.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) is the most commonly diagnosed behavioral disorder in childhood and likely represents an extreme of normal behavior. ADHD significantly impacts learning in school-age children and leads to impaired functioning throughout the life span. There is strong evidence for a genetic etiology of the disorder, although putative alleles, principally in dopamine-related pathways suggested by candidate-gene studies, have very small effect sizes. We use affected-sib-pair analysis in 203 families to localize the first major susceptibility locus for ADHD to a 12-cM region on chromosome 16p13 (maximum LOD score 4.2; P=.000005), building upon an earlier genomewide scan of this disorder. The region overlaps that highlighted in three genome scans for autism, a disorder in which inattention and hyperactivity are common, and physically maps to a 7-Mb region on 16p13. These findings suggest that variations in a gene on 16p13 may contribute to common deficits found in both ADHD and autism.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
  • Smits, R. (2000). Temporal distribution of information for human consonant recognition in VCV utterances. Journal of Phonetics, 28, 111-135. doi:10.1006/jpho.2000.0107.

    Abstract

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature, i.e., the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
  • Snijders Blok, L., Verseput, J., Rots, D., Venselaar, H., Innes, A. M., Stumpel, C., Õunap, K., Reinson, K., Seaby, E. G., McKee, S., Burton, B., Kim, K., Van Hagen, J. M., Waisfisz, Q., Joset, P., Steindl, K., Rauch, A., Li, D., Zackai, E. H., Sheppard, S. E., Keena, B., Hakonarson, H., Roos, A., Kohlschmidt, N., Cereda, A., Iascone, M., Rebessi, E., Kernohan, K. D., Campeau, P. M., Millan, F., Taylor, J. A., Lochmüller, H., Higgs, M. R., Goula, A., Bernhard, B., Velasco, D. J., Schmanski, A. A., Stark, Z., Gallacher, L., Pais, L., Marcogliese, P. C., Yamamoto, S., Raun, N., Jakub, T. E., Kramer, J. M., Den Hoed, J., Fisher, S. E., Brunner, H. G., & Kleefstra, T. (2023). A clustering of heterozygous missense variants in the crucial chromatin modifier WDR5 defines a new neurodevelopmental disorder. Human Genetics and Genomics Advances, 4(1): 100157. doi:10.1016/j.xhgg.2022.100157.

    Abstract

    WDR5 is a broadly studied, highly conserved key protein involved in a wide array of biological functions. Among these functions, WDR5 is a part of several protein complexes that affect gene regulation via post-translational modification of histones. We collected data from 11 unrelated individuals with six different rare de novo germline missense variants in WDR5; one identical variant was found in five individuals, and another variant in two individuals. All individuals had neurodevelopmental disorders including speech/language delays (N=11), intellectual disability (N=9), epilepsy (N=7) and autism spectrum disorder (N=4). Additional phenotypic features included abnormal growth parameters (N=7), heart anomalies (N=2) and hearing loss (N=2). Three-dimensional protein structures indicate that all the residues affected by these variants are located at the surface of one side of the WDR5 protein. It is predicted that five out of the six amino acid substitutions disrupt interactions of WDR5 with RbBP5 and/or KMT2A/C, as part of the COMPASS (complex proteins associated with Set1) family complexes. Our experimental approaches in Drosophila melanogaster and human cell lines show normal protein expression, localization and protein-protein interactions for all tested variants. These results, together with the clustering of variants in a specific region of WDR5 and the absence of truncating variants so far, suggest that dominant-negative or gain-of-function mechanisms might be at play. All in all, we define a neurodevelopmental disorder associated with missense variants in WDR5 and a broad range of features. This finding highlights the important role of genes encoding COMPASS family proteins in neurodevelopmental disorders.
  • Söderström, P., & Cutler, A. (2023). Early neuro-electric indication of lexical match in English spoken-word recognition. PLOS ONE, 18(5): e0285286. doi:10.1371/journal.pone.0285286.

    Abstract

    We investigated early electrophysiological responses to spoken English words embedded in neutral sentence frames, using a lexical decision paradigm. As words unfold in time, similar-sounding lexical items compete for recognition within 200 milliseconds after word onset. A small number of studies have previously investigated event-related potentials in this time window in English and French, with results differing in direction of effects as well as component scalp distribution. Investigations of spoken-word recognition in Swedish have reported an early left-frontally distributed event-related potential that increases in amplitude as a function of the probability of a successful lexical match as the word unfolds. Results from the present study indicate that the same process may occur in English: we propose that increased certainty of a ‘word’ response in a lexical decision task is reflected in the amplitude of an early left-anterior brain potential beginning around 150 milliseconds after word onset. This in turn is proposed to be connected to the probabilistically driven activation of possible upcoming word forms.

    Additional information

    The datasets are available here
  • Soheili-Nezhad, S., Sprooten, E., Tendolkar, I., & Medici, M. (2023). Exploring the genetic link between thyroid dysfunction and common psychiatric disorders: A specific hormonal or a general autoimmune comorbidity. Thyroid, 33(2), 159-168. doi:10.1089/thy.2022.0304.

    Abstract

    Background: The hypothalamus-pituitary-thyroid axis coordinates brain development and postdevelopmental function. Thyroid hormone (TH) variations, even within the normal range, have been associated with the risk of developing common psychiatric disorders, although the underlying mechanisms remain poorly understood.

    Methods: To get new insight into the potentially shared mechanisms underlying thyroid dysfunction and psychiatric disorders, we performed a comprehensive analysis of multiple phenotypic and genotypic databases. We investigated the relationship of thyroid disorders with depression, bipolar disorder (BIP), and anxiety disorders (ANXs) in 497,726 subjects from U.K. Biobank. We subsequently investigated genetic correlations between thyroid disorders, thyrotropin (TSH), and free thyroxine (fT4) levels, with the genome-wide factors that predispose to psychiatric disorders. Finally, the observed global genetic correlations were furthermore pinpointed to specific local genomic regions.

    Results: Hypothyroidism was positively associated with an increased risk of major depressive disorder (MDD; OR = 1.31, p = 5.29 × 10⁻⁸⁹), BIP (OR = 1.55, p = 0.0038), and ANX (OR = 1.16, p = 6.22 × 10⁻⁸). Hyperthyroidism was associated with MDD (OR = 1.11, p = 0.0034) and ANX (OR = 1.34, p = 5.99 × 10⁻⁶). Genetically, strong coheritability was observed between thyroid disease and both major depressive disorder (rg = 0.17, p = 2.7 × 10⁻⁴) and ANXs (rg = 0.17, p = 6.7 × 10⁻⁶). This genetic correlation was particularly strong at the major histocompatibility complex locus on chromosome 6 (p < 10⁻⁵), but further analysis showed that other parts of the genome also contributed to this global effect. Importantly, neither TSH nor fT4 levels were genetically correlated with mood disorders.

    Conclusions: Our findings highlight an underlying association between autoimmune hypothyroidism and mood disorders, which is not mediated through THs and in which autoimmunity plays a prominent role. While these findings could shed new light on the potential ineffectiveness of treating (minor) variations in thyroid function in psychiatric disorders, further research is needed to identify the exact underlying molecular mechanisms.

    Additional information

    supplementary table S1
  • Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.

    Abstract

    Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and gene expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging.
  • Sollis, E., Den Hoed, J., Quevedo, M., Estruch, S. B., Vino, A., Dekkers, D. H. W., Demmers, J. A. A., Poot, R., Derizioti, P., & Fisher, S. E. (2023). Characterization of the TBR1 interactome: Variants associated with neurodevelopmental disorders disrupt novel protein interactions. Human Molecular Genetics, 32(9): ddac311, pp. 1497-1510. doi:10.1093/hmg/ddac311.

    Abstract

    TBR1 is a neuron-specific transcription factor involved in brain development and implicated in a neurodevelopmental disorder (NDD) combining features of autism spectrum disorder (ASD), intellectual disability (ID) and speech delay. TBR1 has been previously shown to interact with a small number of transcription factors and co-factors also involved in NDDs (including CASK, FOXP1/2/4 and BCL11A), suggesting that the wider TBR1 interactome may have a significant bearing on normal and abnormal brain development. Here we have identified approximately 250 putative TBR1-interaction partners by affinity purification coupled to mass spectrometry. As well as known TBR1-interactors such as CASK, the identified partners include transcription factors and chromatin modifiers, along with ASD- and ID-related proteins. Five interaction candidates were independently validated using bioluminescence resonance energy transfer assays. We went on to test the interaction of these candidates with TBR1 protein variants implicated in cases of NDD. The assays uncovered disturbed interactions for NDD-associated variants and identified two distinct protein-binding domains of TBR1 that have essential roles in protein–protein interaction.
  • Spinelli, E., Cutler, A., & McQueen, J. M. (2002). Resolution of liaison for lexical access in French. Revue Française de Linguistique Appliquée, 7, 83-96.

    Abstract

    Spoken word recognition involves automatic activation of lexical candidates compatible with the perceived input. In running speech, words abut one another without intervening gaps, and syllable boundaries can mismatch with word boundaries. For instance, liaison in ’petit agneau’ creates a syllable beginning with a consonant although ’agneau’ begins with a vowel. In two cross-modal priming experiments we investigate how French listeners recognise words in liaison environments. These results suggest that the resolution of liaison in part depends on acoustic cues which distinguish liaison from non-liaison consonants, and in part on the availability of lexical support for a liaison interpretation.
  • Srivastava, S., Budwig, N., & Narasimhan, B. (2005). A developmental-functionalist view of the development of transitive and intransitive constructions in a Hindi-speaking child: A case study. International Journal of Idiographic Science, 2.
  • Stärk, K., Kidd, E., & Frost, R. L. A. (2023). Close encounters of the word kind: Attested distributional information boosts statistical learning. Language Learning, 73(2), 341-373. doi:10.1111/lang.12523.

    Abstract

    Statistical learning, the ability to extract regularities from input (e.g., in language), is likely supported by learners’ prior expectations about how component units co-occur. In this study, we investigated how adults’ prior experience with sublexical regularities in their native language influences performance on an empirical language learning task. Forty German-speaking adults completed a speech repetition task in which they repeated eight-syllable sequences from two experimental languages: one containing disyllabic words comprised of frequently occurring German syllable transitions (naturalistic words) and the other containing words made from unattested syllable transitions (non-naturalistic words). The participants demonstrated learning from both naturalistic and non-naturalistic stimuli. However, learning was superior for the naturalistic sequences, indicating that the participants had used their existing distributional knowledge of German to extract the naturalistic words faster and more accurately than the non-naturalistic words. This finding supports theories of statistical learning as a form of chunking, whereby frequently co-occurring units become entrenched in long-term memory.

    Additional information

    accessible summary appendix S1
  • Stivers, T. (2005). Parent resistance to physicians' treatment recommendations: One resource for initiating a negotiation of the treatment decision. Health Communication, 18(1), 41-74. doi:10.1207/s15327027hc1801_3.

    Abstract

    This article examines pediatrician-parent interaction in the context of acute pediatric encounters for children with upper respiratory infections. Parents and physicians orient to treatment recommendations as normatively requiring parent acceptance for physicians to close the activity. Through acceptance, withholding of acceptance, or active resistance, parents have resources with which to negotiate for a treatment outcome that is in line with their own wants. This article offers evidence that even in acute care, shared decision making not only occurs but, through normative constraints, is mandated for parents and physicians to reach accord in the treatment decision.
  • Stivers, T. (2005). Modified repeats: One method for asserting primary rights from second position. Research on Language and Social Interaction, 38(2), 131-158. doi:10.1207/s15327973rlsi3802_1.

    Abstract

    In this article I examine one practice speakers have for confirming when confirmation was not otherwise relevant. The practice involves a speaker repeating an assertion previously made by another speaker in modified form with stress on the copula/auxiliary. I argue that these modified repeats work to undermine the first speaker's default ownership and rights over the claim and instead assert the primacy of the second speaker's rights to make the statement. Two types of modified repeats are identified: partial and full. Although both involve competing for primacy of the claim, they occur in distinct sequential environments: The former are generally positioned after a first claim was epistemically downgraded, whereas the latter are positioned following initial claims that were offered straightforwardly, without downgrading.
  • Stivers, T., Rossi, G., & Chalfoun, A. (2023). Ambiguities in action ascription. Social Forces, 101(3), 1552-1579. doi:10.1093/sf/soac021.

    Abstract

    In everyday interactions with one another, speakers not only say things but also do things like offer, complain, reject, and compliment. Through observation, it is possible to see that much of the time people unproblematically understand what others are doing. Research on conversation has further documented how speakers’ word choice, prosody, grammar, and gesture all help others to recognize what actions they are performing. In this study, we rely on spontaneous naturally occurring conversational data where people have trouble making their actions understood to examine what leads to ambiguous actions, bringing together prior research and identifying recurrent types of ambiguity that hinge on different dimensions of social action. We then discuss the range of costs and benefits for social actors when actions are clear versus ambiguous. Finally, we offer a conceptual model of how, at a microlevel, action ascription is done. Actions in interaction are building blocks for social relations; at each turn, an action can strengthen or strain the bond between two individuals. Thus, a unified theory of action ascription at a microlevel is an essential component for broader theories of social action and of how social actions produce, maintain, and revise the social world.
  • Stivers, T. (2002). 'Symptoms only' and 'Candidate diagnoses': Presenting the problem in pediatric encounters. Health Communication, 14(3), 299-338.
  • Stivers, T., & Sidnell, J. (2005). Introduction: Multimodal interaction. Semiotica, 156(1/4), 1-20. doi:10.1515/semi.2005.2005.156.1.

    Abstract

    That human social interaction involves the intertwined cooperation of different modalities is uncontroversial. Researchers in several allied fields have, however, only recently begun to document the precise ways in which talk, gesture, gaze, and aspects of the material surround are brought together to form coherent courses of action. The papers in this volume are attempts to develop this line of inquiry. Although the authors draw on a range of analytic, theoretical, and methodological traditions (conversation analysis, ethnography, distributed cognition, and workplace studies), all are concerned to explore and illuminate the inherently multimodal character of social interaction. Recent studies, including those collected in this volume, suggest that different modalities work together not only to elaborate the semantic content of talk but also to constitute coherent courses of action. In this introduction we present evidence for this position. We begin by reviewing some select literature focusing primarily on communicative functions and interactive organizations of specific modalities before turning to consider the integration of distinct modalities in interaction.
  • Stivers, T. (2005). Non-antibiotic treatment recommendations: Delivery formats and implications for parent resistance. Social Science & Medicine, 60(5), 949-964. doi:10.1016/j.socscimed.2004.06.040.

    Abstract

    This study draws on a database of 570 community-based acute pediatric encounters in the USA and uses conversation analysis as a methodology to identify two formats physicians use to recommend non-antibiotic treatment in acute pediatric care (using a subset of 309 cases): recommendations for particular treatment (e.g., “I’m gonna give her some cough medicine.”) and recommendations against particular treatment (e.g., “She doesn’t need any antibiotics.”). The findings are that the presentation of a specific affirmative recommendation for treatment is less likely to engender parent resistance to a non-antibiotic treatment recommendation than a recommendation against particular treatment even if the physician later offers a recommendation for particular treatment. It is suggested that physicians who provide a specific positive treatment recommendation followed by a negative recommendation are most likely to attain parent alignment and acceptance when recommending a non-antibiotic treatment for a viral upper respiratory illness.
  • Stivers, T. (2002). Overt parent pressure for antibiotic medication in pediatric encounters. Social Science and Medicine, 54(7), 1111-1130.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they maintain their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Striano, T., & Liszkowski, U. (2005). Sensitivity to the context of facial expression in the still face at 3-, 6-, and 9-months of age. Infant Behavior and Development, 28(1), 10-19. doi:10.1016/j.infbeh.2004.06.004.

    Abstract

    Thirty-eight 3-, 6-, and 9-month-old infants interacted in a face to face situation with a female stranger who disrupted the on-going interaction with 30 s Happy and Neutral still face episodes. Three- and 6-month-olds manifested a robust still face response for gazing and smiling. For smiling, 9-month-olds manifested a floor effect such that no still face effect could be shown. For gazing, 9-month-olds' still face response was modulated by the context of interaction such that it was less pronounced if a happy still face was presented first. The findings point to a developmental transition by the end of the first year, whereby infants' still face response becomes increasingly influenced by the context of social interaction.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Swingley, D., & Fernald, A. (2002). Recognition of words referring to present and absent objects by 24-month-olds. Journal of Memory and Language, 46(1), 39-56. doi:10.1006/jmla.2001.2799.

    Abstract

    Three experiments tested young children's efficiency in recognizing words in speech referring to absent objects. Seventy-two 24-month-olds heard sentences containing target words denoting objects that were or were not present in a visual display. Children's eye movements were monitored as they heard the sentences. Three distinct patterns of response were shown. Children hearing a familiar word that was an appropriate label for the currently fixated picture maintained their gaze. Children hearing a familiar word that could not apply to the currently fixated picture rapidly shifted their gaze to the alternative picture, whether that alternative was the named target or not, and then continued to search for an appropriate referent. Finally, children hearing an unfamiliar word shifted their gaze slowly and irregularly. This set of outcomes is interpreted as evidence that by 24 months, rapid activation in word recognition does not depend on the presence of the words' referents. Rather, very young children are capable of quickly and efficiently interpreting words in the absence of visual supporting context.
  • Swingley, D. (2005). Statistical clustering and the contents of the infant vocabulary. Cognitive Psychology, 50(1), 86-132. doi:10.1016/j.cogpsych.2004.06.001.

    Abstract

    Infants parse speech into word-sized units according to biases that develop in the first year. One bias, present before the age of 7 months, is to cluster syllables that tend to co-occur. The present computational research demonstrates that this statistical clustering bias could lead to the extraction of speech sequences that are actual words, rather than missegmentations. In English and Dutch, these word-forms exhibit the strong–weak (trochaic) pattern that guides lexical segmentation after 8 months, suggesting that the trochaic parsing bias is learned as a generalization from statistically extracted bisyllables, and not via attention to short utterances or to high-frequency bisyllables. Extracted word-forms come from various syntactic classes, and exhibit distributional characteristics enabling rudimentary sorting of words into syntactic categories. The results highlight the importance of infants’ first year in language learning: though they may know the meanings of very few words, infants are well on their way to building a vocabulary.
  • Swingley, D. (2005). 11-month-olds' knowledge of how familiar words sound. Developmental Science, 8(5), 432-443. doi:10.1111/j.1467-7687.2005.00432.

    Abstract

    During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.
  • Swingley, D., & Aslin, R. N. (2002). Lexical neighborhoods and the word-form representations of 14-month-olds. Psychological Science, 13(5), 480-484. doi:10.1111/1467-9280.00485.

    Abstract

    The degree to which infants represent phonetic detail in words has been a source of controversy in phonology and developmental psychology. One prominent hypothesis holds that infants store words in a vague or inaccurate form until the learning of similar-sounding neighbors forces attention to subtle phonetic distinctions. In the experiment reported here, we used a visual fixation task to assess word recognition. We present the first evidence indicating that, in fact, the lexical representations of 14- and 15-month-olds are encoded in fine detail, even when this detail is not functionally necessary for distinguishing similar words in the infant’s vocabulary. Exposure to words is sufficient for well-specified lexical representations, even well before the vocabulary spurt. These results suggest developmental continuity in infants’ representations of speech: As infants begin to build a vocabulary and learn word meanings, they use the perceptual abilities previously demonstrated in tasks testing the discrimination and categorization of meaningless syllables.
  • Swingley, D., & Aslin, R. N. (2000). Spoken word recognition and lexical representation in very young children. Cognition, 76, 147-166. doi:10.1016/S0010-0277(00)00081-0.

    Abstract

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of one segment with a similar segment, as in ‘baby–vaby’. Children heard sentences containing these words while viewing two pictures, one of which was the referent of the sentence. Analyses of children's eye movements showed that children recognized the spoken words in both conditions, but that recognition was significantly poorer when words were mispronounced. The effects of mispronunciation on recognition were unrelated to age or to spoken vocabulary size. The results suggest that children's representations of familiar words are phonetically well-specified, and that this specification may not be a consequence of the need to differentiate similar words in production.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself have to be named.
  • Tamaoka, K., Sakai, H., Miyaoka, Y., Ono, H., Fukuda, M., Wu, Y., & Verdonschot, R. G. (2023). Sentential inference bridging between lexical/grammatical knowledge and text comprehension among native Chinese speakers learning Japanese. PLoS One, 18(4): e0284331. doi:10.1371/journal.pone.0284331.

    Abstract

    The current study explored the role of sentential inference in connecting lexical/grammatical knowledge and overall text comprehension in foreign language learning. Using structural equation modeling (SEM), causal relationships were examined between four latent variables: lexical knowledge, grammatical knowledge, sentential inference, and text comprehension. The study analyzed 281 Chinese university students learning Japanese as a second language and compared two causal models: (1) the partially-mediated model, which suggests that lexical knowledge, grammatical knowledge, and sentential inference concurrently influence text comprehension, and (2) the wholly-mediated model, which posits that both lexical and grammatical knowledge impact sentential inference, which then further affects text comprehension. The SEM comparison analysis supported the wholly-mediated model, showing sequential causal relationships from lexical knowledge to sentential inference and then to text comprehension, without significant contribution from grammatical knowledge. The results indicate that sentential inference serves as a crucial bridge between lexical knowledge and text comprehension.
  • Tamaoka, K., Zhang, J., Koizumi, M., & Verdonschot, R. G. (2023). Phonological encoding in Tongan: An experimental investigation. Quarterly Journal of Experimental Psychology, 76(10), 2197-2430. doi:10.1177/17470218221138770.

    Abstract

    This study is the first to report chronometric evidence on Tongan language production. It has been speculated that the mora plays an important role during Tongan phonological encoding. A mora follows the (C)V form, so /a/ and /ka/ (but not /k/) denote a mora in Tongan. Using a picture-word naming paradigm, Tongan native speakers named pictures containing superimposed non-word distractors. This task has been used before in Japanese, Korean, and Vietnamese to investigate the initially selected unit during phonological encoding (IPU). Compared to control distractors, both onset and mora overlapping distractors resulted in faster naming latencies. Several alternative explanations for the pattern of results - proficiency in English, knowledge of Latin script, and downstream effects - are discussed. However, we conclude that Tongan phonological encoding likely natively uses the phoneme, and not the mora, as the IPU.

    Additional information

    supplemental material
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Tanenhaus, M. K., Magnuson, J. S., Dahan, D., & Chambers, C. G. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. doi:10.1023/A:1026464108329.

    Abstract

    A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
  • Tatsumi, T., & Sala, G. (2023). Learning conversational dependency: Children’s response using un in Japanese. Journal of Child Language, 50(5), 1226-1244. doi:10.1017/S0305000922000344.

    Abstract

    This study investigates how Japanese-speaking children learn interactional dependencies in conversations that determine the use of un, a token typically used as a positive response for yes-no questions, backchannel, and acknowledgement. We hypothesise that children learn to produce un appropriately by recognising different types of cues occurring in the immediately preceding turns. We built a set of generalised linear models on the longitudinal conversation data from seven children aged 1 to 5 years and their caregivers. Our models revealed that children not only increased their un production, but also learned to attend relevant cues in the preceding turns to understand when to respond by producing un. Children increasingly produced un when their interlocutors asked a yes-no question or signalled the continuation of their own speech. These results illustrate how children learn the probabilistic dependency between adjacent turns, and become able to participate in conversational interactions.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Keurs, M., Brown, C. M., & Hagoort, P. (2002). Lexical processing of vocabulary class in patients with Broca's aphasia: An event-related brain potential study on agrammatic comprehension. Neuropsychologia, 40(9), 1547-1561. doi:10.1016/S0028-3932(02)00025-8.

    Abstract

    This paper presents electrophysiological evidence of an impairment in the on-line processing of word class information in patients with Broca’s aphasia with agrammatic comprehension. Event-related brain potentials (ERPs) were recorded from the scalp while Broca patients and non-aphasic control subjects read open- and closed-class words that appeared one at a time on a PC screen. Separate waveforms were computed for open- and closed-class words. The non-aphasic control subjects showed a modulation of an early left anterior negativity in the 210–325 ms as a function of vocabulary class (VC), and a late left anterior negative shift to closed-class words in the 400–700 ms epoch. An N400 effect was present in both control subjects and Broca patients. We have taken the early electrophysiological differences to reflect the first availability of word-category information from the mental lexicon. The late differences can be related to post-lexical processing. In contrast to the control subjects, the Broca patients showed no early VC effect and no late anterior shift to closed-class words. The results support the view that an incomplete and/or delayed availability of word-class information might be an important factor in Broca’s agrammatic comprehension.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (2002). Systems of nominal classification in East Papuan languages. Oceanic Linguistics, 41(1), 63-88.

    Abstract

    The existence of nominal classification systems has long been thought of as one of the defining features of the Papuan languages of island New Guinea. However, while almost all of these languages do have nominal classification systems, they are, in fact, extremely divergent from each other. This paper examines these systems in the East Papuan languages in order to examine the question of the relationship between these Papuan outliers. Nominal classification systems are often archaic, preserving older features lost elsewhere in a language. Also, evidence shows that they are not easily borrowed into languages (although they can be). For these reasons, it is useful to consider nominal classification systems as a tool for exploring ancient historical relationships between languages. This paper finds little evidence of relationship between the nominal classification systems of the East Papuan languages as a whole. It argues that the mere existence of nominal classification systems cannot be used as evidence that the East Papuan languages form a genetic family. The simplest hypothesis is that either the systems were inherited so long ago as to obscure the genetic evidence, or else the appearance of nominal classification systems in these languages arose through borrowing of grammatical systems rather than of morphological forms.
  • Terrill, A. (2002). Why make books for people who can't read? A perspective on documentation of an endangered language from Solomon Islands. International Journal of the Sociology of Language, 155/156(1), 205-219. doi:10.1515/ijsl.2002.029.

    Abstract

    This paper explores the issue of documenting an endangered language from the perspective of a community with low levels of literacy. I first discuss the background of the language community with whom I work, the Lavukal people of Solomon Islands, and discuss whether, and to what extent, Lavukaleve is an endangered language. I then go on to discuss the documentation project. My main point is that while low literacy levels and a nonreading culture would seem to make documentation a strange choice as a tool for language maintenance, in fact both serve as powerful cultural symbols of the importance and prestige of Lavukaleve. It is well known that a common reason for language death is that speakers choose not to transmit their language to the next generation (e.g. Winter 1993). Lavukaleve is particularly vulnerable in this respect. By utilizing cultural symbols of status and prestige, the standing of Lavukaleve can be enhanced, thus helping to ensure the transmission of Lavukaleve to future generations.
  • Terrill, A. (2002). [Review of the book The Interface between syntax and discourse in Korafe, a Papuan language of Papua New Guinea by Cynthia J. M. Farr]. Linguistic Typology, 6(1), 110-116. doi:10.1515/lity.2002.004.
  • Tezcan, F., Weissbart, H., & Martin, A. E. (2023). A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension. eLife, 12: e82386. doi:10.7554/eLife.82386.

    Abstract

    When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and of abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, by internally generated linguistic units, or by the interplay of both remains contentious. In this study, we used naturalistic story listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacts the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges is enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in the comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in the comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-level context is less constraining. When the language was not comprehended, acoustic features, but not phonemic ones, were more strongly modulated; in contrast, when the native language was comprehended, phoneme features were more strongly modulated. Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraints in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
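    A minimal sketch of the general encoding-model logic described in the abstract above (predicting neural data from time-lagged stimulus features and comparing an acoustic-only model with an acoustic-plus-phoneme model). This is an illustration in Python with hypothetical data, not the authors' pipeline; all array shapes, lag ranges, and regularization values are assumptions.

      # Forward encoding model sketch: ridge regression from lagged stimulus
      # features to EEG, scored by cross-validated prediction correlation.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      def lag_features(X, max_lag):
          """Stack copies of X shifted by 0..max_lag samples."""
          n, d = X.shape
          lagged = np.zeros((n, d * (max_lag + 1)))
          for lag in range(max_lag + 1):
              lagged[lag:, lag * d:(lag + 1) * d] = X[:n - lag]
          return lagged

      def encoding_score(features, eeg, max_lag=40, alpha=1.0):
          """Mean cross-validated correlation between predicted and observed EEG."""
          X = lag_features(features, max_lag)
          scores = []
          for train, test in KFold(n_splits=5).split(X):
              pred = Ridge(alpha=alpha).fit(X[train], eeg[train]).predict(X[test])
              r = [np.corrcoef(pred[:, ch], eeg[test][:, ch])[0, 1]
                   for ch in range(eeg.shape[1])]
              scores.append(np.mean(r))
          return float(np.mean(scores))

      # Hypothetical data: 60 s of 100 Hz EEG (32 channels), one acoustic-edge
      # regressor, and one binary phoneme-onset regressor.
      rng = np.random.default_rng(0)
      eeg = rng.standard_normal((6000, 32))
      acoustic = rng.standard_normal((6000, 1))
      phonemes = (rng.random((6000, 1)) < 0.05).astype(float)

      print("acoustic only:", encoding_score(acoustic, eeg))
      print("acoustic + phonemes:", encoding_score(np.hstack([acoustic, phonemes]), eeg))

    The difference between the two scores is the kind of quantity such studies interpret as tracking of phoneme-level features over and above acoustic edges.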
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2002). Going, going, gone: The acquisition of the verb ‘go’. Journal of Child Language, 29(4), 783-811. doi:10.1017/S030500090200538X.

    Abstract

    This study investigated different accounts of early argument structure acquisition and verb paradigm building through the detailed examination of the acquisition of the verb Go. Data from 11 children followed longitudinally between the ages of 2;0 and 3;0 were examined. Children's uses of the different forms of Go were compared with respect to syntactic structure and the semantics encoded. The data are compatible with the suggestion that the children were not operating with a single verb representation that differentiated between different forms of Go but rather that their knowledge of the relationship between the different forms of Go varied depending on the structure produced and the meaning encoded. However, a good predictor of the children's use of different forms of Go in particular structures and to express particular meanings was the frequency of use of those structures and meanings with particular forms of Go in the input. The implications of these findings for theories of syntactic category formation and abstract rule-based descriptions of grammar are discussed.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2005). The acquisition of auxiliary syntax: BE and HAVE. Cognitive Linguistics, 16(1), 247-277. doi:10.1515/cogl.2005.16.1.247.

    Abstract

    This study examined patterns of auxiliary provision and omission for the auxiliaries BE and HAVE in a longitudinal data set from 11 children between the ages of two and three years. Four possible explanations for auxiliary omission—a lack of lexical knowledge, performance limitations in production, the Optional Infinitive hypothesis, and patterns of auxiliary use in the input—were examined. The data suggest that although none of these accounts provides a full explanation for the pattern of auxiliary use and nonuse observed in children's early speech, integrating input-based and lexical learning-based accounts of early language acquisition within a constructivist approach appears to provide a possible framework in which to understand the patterns of auxiliary use found in the children's speech. The implications of these findings for models of children's early language acquisition are discussed.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Tkalcec, A., Bierlein, M., Seeger‐Schneider, G., Walitza, S., Jenny, B., Menks, W. M., Felhbaum, L. V., Borbas, R., Cole, D. M., Raschle, N., Herbrecht, E., Stadler, C., & Cubillo, A. (2023). Empathy deficits, callous‐unemotional traits and structural underpinnings in autism spectrum disorder and conduct disorder youth. Autism Research, 16(10), 1946-1962. doi:10.1002/aur.2993.

    Abstract

    Distinct empathy deficits are often described in patients with conduct disorder (CD) and autism spectrum disorder (ASD) yet their neural underpinnings and the influence of comorbid Callous-Unemotional (CU) traits are unclear. This study compares the cognitive (CE) and affective empathy (AE) abilities of youth with CD and ASD, their potential neuroanatomical correlates, and the influence of CU traits on empathy. Adolescents and parents/caregivers completed empathy questionnaires (N = 148 adolescents, mean age = 15.16 years) and T1 weighted images were obtained from a subsample (N = 130). Group differences in empathy and the influence of CU traits were investigated using Bayesian analyses and Voxel-Based Morphometry with Threshold-Free Cluster Enhancement focusing on regions involved in AE (insula, amygdala, inferior frontal gyrus and cingulate cortex) and CE processes (ventromedial prefrontal cortex, temporoparietal junction, superior temporal gyrus, and precuneus). The ASD group showed lower parent-reported AE and CE scores and lower self-reported CE scores while the CD group showed lower parent-reported CE scores than controls. When accounting for the influence of CU traits no AE deficits in ASD and CE deficits in CD were found, but CE deficits in ASD remained. Across all participants, CU traits were negatively associated with gray matter volumes in anterior cingulate which extends into the mid cingulate, ventromedial prefrontal cortex, and precuneus. Thus, although co-occurring CU traits have been linked to global empathy deficits in reports and underlying brain structures, its influence on empathy aspects might be disorder-specific. Investigating the subdimensions of empathy may therefore help to identify disorder-specific empathy deficits.
  • Tomasek, M., Ravignani, A., Boucherie, P. H., Van Meyel, S., & Dufour, V. (2023). Spontaneous vocal coordination of vocalizations to water noise in rooks (Corvus frugilegus): An exploratory study. Ecology and Evolution, 13(2): e9791. doi:10.1002/ece3.9791.

    Abstract

    The ability to control one's vocal production is a major advantage in acoustic communication. Yet, not all species have the same level of control over their vocal output. Several bird species can interrupt their song upon hearing an external stimulus, but there is no evidence of how flexible this behavior is. Most research on corvids focuses on their cognitive abilities, and few studies explore their vocal aptitudes. Recent research shows that crows can be experimentally trained to vocalize in response to a brief visual stimulus. Our study investigated vocal control abilities with a more ecologically embedded approach in rooks. We show that two rooks could spontaneously coordinate their vocalizations to a long-lasting stimulus (the sound of their small bathing pool being filled with a water hose), one of them roughly adjusting its vocalizations (on the order of seconds) as the stimulus began and stopped. This exploratory study adds to the literature showing that corvids, a group of species capable of cognitive prowess, are indeed able to display good vocal control abilities.
  • Trilsbeek, P., & Wittenburg, P. (2005). Archiving challenges. In J. Gippert, N. Himmelmann, & U. Mosel (Eds.), Essentials of language documentation (pp. 311-335). Berlin: Mouton de Gruyter.
  • Trujillo, J. P., & Holler, J. (2023). Interactionally embedded gestalt principles of multimodal human communication. Perspectives on Psychological Science, 18(5), 1136-1159. doi:10.1177/17456916221141422.

    Abstract

    Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
  • Trujillo, J. P., Dideriksen, C., Tylén, K., Christiansen, M. H., & Fusaroli, R. (2023). The dynamic interplay of kinetic and linguistic coordination in Danish and Norwegian conversation. Cognitive Science, 47(6): e13298. doi:10.1111/cogs.13298.

    Abstract

    In conversation, individuals work together to achieve communicative goals, complementing and aligning language and body with each other. An important emerging question is whether interlocutors entrain with one another equally across linguistic levels (e.g., lexical, syntactic, and semantic) and modalities (i.e., speech and gesture), or whether there are complementary patterns of behavior, with some levels or modalities diverging and others converging in a coordinated fashion. This study assesses how kinematic and linguistic entrainment interact with one another across levels of measurement and according to communicative context. We analyzed data from two matched corpora of dyadic interaction between—respectively—Danish and Norwegian native speakers engaged in affiliative conversations and task-oriented conversations. We assessed linguistic entrainment at the lexical, syntactic, and semantic levels, and kinetic alignment of the head and hands using video-based motion tracking and dynamic time warping. We tested whether—across the two languages—linguistic alignment correlates with kinetic alignment, and whether these kinetic-linguistic associations are modulated either by the type of conversation or by the language spoken. We found that kinetic entrainment was positively associated with low-level linguistic (i.e., lexical) entrainment, while negatively associated with high-level linguistic (i.e., semantic) entrainment, in a cross-linguistically robust way. Our findings suggest that conversation makes use of a dynamic coordination of similarity and complementarity, both between individuals and between different communicative modalities, and provide evidence for a multimodal, interpersonal synergy account of interaction.
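    As a brief illustration of the kinetic side of this analysis: the abstract above names dynamic time warping (DTW) as the way movement traces of two interlocutors are compared. The sketch below is a plain-NumPy textbook DTW over hypothetical head-speed traces, not the authors' implementation, and the input data are invented for illustration.

      # Classic dynamic time warping between two 1-D movement time series,
      # e.g., per-frame head speed of two conversational partners.
      import numpy as np

      def dtw_distance(a, b):
          """Accumulated-cost DTW distance between 1-D sequences a and b."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                       cost[i, j - 1],      # deletion
                                       cost[i - 1, j - 1])  # match
          return cost[n, m]

      # Hypothetical per-frame speed traces for two speakers (arbitrary units).
      speaker_a = np.abs(np.sin(np.linspace(0.0, 6.0, 200))) + 0.1
      speaker_b = np.abs(np.sin(np.linspace(0.3, 6.3, 180))) + 0.1
      print(round(dtw_distance(speaker_a, speaker_b), 2))

    A lower DTW distance indicates more similar (better aligned) movement profiles; in entrainment work such distances are typically compared against surrogate pairings to establish chance levels.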
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It is also not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distribution at the utterance level, and whether these patterns are similar or differ across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., the time between speaker turns).

    Additional information

    Data availability
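    The front- versus back-loading measure described in the Trujillo and Holler (2024) abstract above can be illustrated in a few lines of Python: given per-word probabilities from any language model, compare mean surprisal between the first and second half of an utterance. This is a hedged sketch with invented numbers, not the published analysis code.

      # Classify an utterance as front- or back-loaded by comparing mean
      # per-word surprisal (-log2 p, in bits) across its two halves.
      import math

      def loading(word_probs):
          """Return (label, first-half mean surprisal, second-half mean surprisal)."""
          assert len(word_probs) >= 2, "needs at least two words"
          surprisal = [-math.log2(p) for p in word_probs]
          half = len(surprisal) // 2
          first = sum(surprisal[:half]) / half
          second = sum(surprisal[half:]) / (len(surprisal) - half)
          label = "front" if first > second else "back" if second > first else "flat"
          return label, first, second

      # Hypothetical word probabilities for one utterance: rising surprisal,
      # so the utterance comes out as back-loaded.
      print(loading([0.30, 0.20, 0.10, 0.05, 0.02, 0.01]))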
  • Trupp, M. D., Bignardi, G., Specker, E., Vessel, E. A., & Pelowski, M. (2023). Who benefits from online art viewing, and how: The role of pleasure, meaningfulness, and trait aesthetic responsiveness in computer-based art interventions for well-being. Computers in Human Behavior, 145: 107764. doi:10.1016/j.chb.2023.107764.

    Abstract

    When experienced in person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similarly to what has been reported for in-person visits, online art engagements—easily accessible from personal devices—have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used a Monet interactive art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and further that such trait-level differences were mediated by subjective experiences of pleasure and especially meaningfulness felt during the online art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.

    Additional information

    supplementary material
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Ünal, E., Mamus, E., & Özyürek, A. (2023). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition. Advance online publication. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Uzbas, F., & O’Neill, A. (2023). Spatial Centrosome Proteomic Profiling of Human iPSC-derived Neural Cells. BIO-PROTOCOL, 13(17): e4812. doi:10.21769/BioProtoc.4812.

    Abstract

    The centrosome governs many pan-cellular processes including cell division, migration, and cilium formation. However, very little is known about its cell type-specific protein composition and the sub-organellar domains where these protein interactions take place. Here, we outline a protocol for the spatial interrogation of the centrosome proteome in human cells, such as those differentiated from induced pluripotent stem cells (iPSCs), through co-immunoprecipitation of protein complexes around selected baits that are known to reside at different structural parts of the centrosome, followed by mass spectrometry. The protocol describes expansion and differentiation of human iPSCs to dorsal forebrain neural progenitors and cortical projection neurons, harvesting and lysis of cells for protein isolation, co-immunoprecipitation with antibodies against selected bait proteins, preparation for mass spectrometry, processing the mass spectrometry output files using MaxQuant software, and statistical analysis using Perseus software to identify the enriched proteins by each bait. Given the large number of cells needed for the isolation of centrosome proteins, this protocol can be scaled up or down by modifying the number of bait proteins and can also be carried out in batches. It can potentially be adapted for other cell types, organelles, and species as well.
  • van der Burght, C. L., Numssen, O., Schlaak, B., Goucha, T., & Hartwigsen, G. (2023). Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Human Brain Mapping, 44(2), 585-598. doi:10.1002/hbm.26086.

    Abstract

    Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation and its functional relevance has yet to be confirmed. This study probed the role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions, while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task-specificity. However, electrical field simulations revealed differential effects for both subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide first evidence for the functional relevance of left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of a pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues for language processing.

    Additional information

    supplementary information
  • Van Hoey, T., Thompson, A. L., Do, Y., & Dingemanse, M. (2023). Iconicity in ideophones: Guessing, memorizing, and reassessing. Cognitive Science, 47(4): e13268. doi:10.1111/cogs.13268.

    Abstract

    Iconicity, or the resemblance between form and meaning, is often ascribed a special status and contrasted with default assumptions of arbitrariness in spoken language. But does iconicity in spoken language have a special status when it comes to learnability? A simple way to gauge learnability is to see how well something is retrieved from memory. We can further contrast this with guessability, to see (1) whether the ease of guessing the meanings of ideophones outperforms the rate at which they are remembered; and (2) how willing participants are to reassess what they were taught in a prior task—a novel contribution of this study. We replicate prior guessing and memory tasks using ideophones and adjectives from Japanese, Korean, and Igbo. Our results show that although native Cantonese speakers guessed ideophone meanings above chance level, they memorized both ideophones and adjectives with comparable accuracy. However, response time data show that participants took significantly longer to respond correctly to adjective–meaning pairs—indicating a discrepancy in cognitive effort that favored the recognition of ideophones. In a follow-up reassessment task, participants who were taught foil translations were more likely to choose the true translations for ideophones rather than adjectives. By comparing the findings from our guessing and memory tasks, we conclude that iconicity is more accessible if a task requires participants to actively seek out sound–meaning associations.
  • Van Wonderen, E., & Nieuwland, M. S. (2023). Lexical prediction does not rationally adapt to prediction error: ERP evidence from pre-nominal articles. Journal of Memory and Language, 132: 104435. doi:10.1016/j.jml.2023.104435.

    Abstract

    People sometimes predict upcoming words during language comprehension, but debate remains on when and to what extent such predictions indeed occur. The rational adaptation hypothesis holds that predictions develop with expected utility: people predict more strongly when predictions are frequently confirmed (low prediction error) rather than disconfirmed. However, supporting evidence is mixed thus far and has only involved measuring responses to supposedly predicted nouns, not to preceding articles that may also be predicted. The current, large-sample (N = 200) ERP study on written discourse comprehension in Dutch therefore employs the well-known ‘pre-nominal prediction effect’: enhanced N400-like ERPs for articles that are unexpected given a likely upcoming noun’s gender (i.e., the neuter gender article ‘het’ when people expect the common gender noun phrase ‘de krant’, the newspaper) compared to expected articles. We investigated whether the pre-nominal prediction effect is larger when most of the presented stories contain predictable article-noun combinations (75% predictable, 25% unpredictable) compared to when most stories contain unpredictable combinations (25% predictable, 75% unpredictable). Our results show the pre-nominal prediction effect in both contexts, with little evidence to suggest that this effect depended on the percentage of predictable combinations. Moreover, the little evidence suggesting such a dependence was primarily observed for unexpected, neuter-gender articles (‘het’), which is inconsistent with the rational adaptation hypothesis. In line with recent demonstrations (Nieuwland, 2021a,b), our results suggest that linguistic prediction is less ‘rational’ or Bayes optimal than is often suggested.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Wijk, C., & Kempen, G. (1982). De ontwikkeling van syntactische formuleervaardigheid bij kinderen van 9 tot 16 jaar. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 37(8), 491-509.

    Abstract

    An essential phenomenon in the development towards syntactic maturity after early childhood is the increasing use of so-called sentence-combining transformations. Especially by using subordination, complex sentences are produced. The research reported here is an attempt to arrive at a more adequate characterization and explanation. Our starting point was an analysis of 280 texts written by Dutch-speaking pupils of the two highest grades of the primary school and the four lowest grades of three different types of secondary education. It was examined whether systematic shifts in the use of certain groups of so-called function words could be traced. We concluded that the development of the syntactic formulating ability can be characterized as an increase in connectivity: the use of all kinds of function words which explicitly mark logico-semantic relations between propositions. This development starts by inserting special adverbs and coordinating conjunctions resulting in various types of coordination. In a later stage, the syntactic patterning of the sentence is affected as well (various types of subordination). The increase in sentence complexity is only one aspect of the entire development. An explanation for the increase in connectivity is offered based upon a distinction between narrative and expository language use. The latter, but not the former, is characterized by frequent occurrence of connectives. The development in syntactic formulating ability includes a high level of skill in expository language use. Speed of development is determined by intensity of training, e.g. in scholastic and occupational settings.
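    The connectivity measure sketched in this abstract (counting function words that explicitly mark logico-semantic relations between propositions) can be illustrated as follows. The connective list and the per-100-tokens normalization below are assumptions for illustration only, not the scoring scheme used in the study.

      # Toy "connectivity" score: rate of explicit Dutch connectives per
      # 100 word tokens, using a small illustrative connective list.
      import re

      CONNECTIVES = {"en", "maar", "want", "dus", "of", "omdat", "hoewel",
                     "terwijl", "doordat", "zodat", "toch", "daarom"}

      def connectivity(text):
          """Connectives per 100 word tokens in a Dutch text sample."""
          tokens = re.findall(r"[a-zàâäéèêëïîôöùûü]+", text.lower())
          hits = sum(token in CONNECTIVES for token in tokens)
          return 100.0 * hits / max(len(tokens), 1)

      sample = "Ik wilde gaan, maar het regende, dus ik bleef thuis omdat ik moe was."
      print(round(connectivity(sample), 1))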
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially for patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor when only a little noise is added.
  • Van Donselaar, W., Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. Quarterly Journal of Experimental Psychology, 58A(2), 251-273. doi:10.1080/02724980343000927.

    Abstract

    Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
  • Van Berkum, J. J. A., Brown, C. M., Zwitserlood, P., Kooijman, V., & Hagoort, P. (2005). Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3), 443-467. doi:10.1037/0278-7393.31.3.443.

    Abstract

    The authors examined whether people can use their knowledge of the wider discourse rapidly enough to anticipate specific upcoming words as a sentence is unfolding. In an event-related brain potential (ERP) experiment, subjects heard Dutch stories that supported the prediction of a specific noun. To probe whether this noun was anticipated at a preceding indefinite article, stories were continued with a gender-marked adjective whose suffix mismatched the upcoming noun's syntactic gender. Prediction-inconsistent adjectives elicited a differential ERP effect, which disappeared in a no-discourse control experiment. Furthermore, in self-paced reading, prediction-inconsistent adjectives slowed readers down before the noun. These findings suggest that people can indeed predict upcoming words in fluent discourse and, moreover, that these predicted words can immediately begin to participate in incremental parsing operations.
  • Van Turennout, M. (2002). Het benoemen van een object veroorzaakt langdurige veranderingen in het brein. Neuropraxis, 6(3), 77-81.
  • Van Halteren, H., Baayen, R. H., Tweedie, F., Haverkort, M., & Neijt, A. (2005). New machine learning methods demonstrate the existence of a human stylome. Journal of Quantitative Linguistics, 12(1), 65-77. doi:10.1080/09296170500055350.

    Abstract

    Earlier research has shown that established authors can be distinguished by measuring specific properties of their writings, their stylome as it were. Here, we examine writings of less experienced authors. We succeed in distinguishing between these authors with a very high probability, which implies that a stylome exists even in the general population. However, the number of traits needed for so successful a distinction is an order of magnitude larger than assumed so far. Furthermore, traits referring to syntactic patterns prove less distinctive than traits referring to vocabulary, but much more distinctive than expected on the basis of current generativist theories of language learning.
  • Van Wijk, C., & Kempen, G. (1982). Kost zinsbouw echt tijd? In R. Stuip, & W. Zwanenberg (Eds.), Handelingen van het zevenendertigste Nederlands Filologencongres (pp. 223-231). Amsterdam: APA-Holland University Press.
  • Van Valin Jr., R. D. (1994). Extraction restrictions, competing theories and the argument from the poverty of the stimulus. In S. D. Lima, R. Corrigan, & G. K. Iverson (Eds.), The reality of linguistic rules (pp. 243-259). Amsterdam: Benjamins.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • van Geenhoven, V. (2002). Raised Possessors and Noun Incorporation in West Greenlandic. Natural Language & Linguistic Theory, 20(4), 759-821.

    Abstract

    This paper addresses the question of whether noun incorporation is a syntactically base-generated or a syntactically derived construction. Focusing on so-called 'raised possessors' in West Greenlandic noun incorporating constructions and presenting some new data, I discuss some problems that arise if we use the derivational framework of Bittner and Hale (1996) to analyze them. I show that if we make the predication relations in noun incorporating constructions overt in their syntax and if we adopt a dynamic approach to semantics, a base-generated syntactic input enriched with a coindexation system is all that we need to arrive at an adequate semantic interpretation of these constructions.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation, and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant–dissonant proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions to account for this difference have been given.
  • Van Wijk, C., & Kempen, G. (1982). Syntactische formuleervaardigheid en het schrijven van opstellen. Pedagogische Studiën, 59, 126-136.

    Abstract

    Several attempts have been made to measure syntactic formulating ability directly and objectively on the basis of spoken or written texts. As a rule, the starting point was the syntactic complexity of the produced utterances. However, this has not led to a plausible, clearly defined, and practically usable index. Following a critical discussion of the notion of complexity, this article proposes the connectivity of utterances as a new criterion: the explicit marking of logico-semantic relations between propositions. Connectivity is easy to score on the basis of function words that mark various forms of coordinate and subordinate clause linkage. This new index avoids the criticisms that could be levelled at complexity, clearly discriminates between groups of pupils differing in age and educational level, and ties in with recent psycholinguistic and sociolinguistic theory. Finally, some educational implications are outlined.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

    Based on the results of an event-related brain potentials (ERP) experiment (van Berkum, Brown, & Hagoort, 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Van der Werf, O. J., Schuhmann, T., De Graaf, T., Ten Oever, S., & Sack, A. T. (2023). Investigating the role of task relevance during rhythmic sampling of spatial locations. Scientific Reports, 13: 12707. doi:10.1038/s41598-023-38968-z.

    Abstract

    Recently, it has been discovered that visuospatial attention operates rhythmically rather than being stably employed over time. A low-frequency 7–8 Hz rhythmic mechanism coordinates periodic windows to sample relevant locations and to shift towards other, less relevant locations in a visual scene. Rhythmic sampling theories would predict that when two locations are relevant, the 8 Hz sampling mechanism splits into two, effectively resulting in a 4 Hz sampling frequency at each location. Therefore, it is expected that rhythmic sampling is influenced by the relative importance of locations for the task at hand. To test this, we employed an orienting task with an arrow cue, in which participants were asked to respond to a target presented in one visual field. The cue-to-target interval was systematically varied, allowing us to assess whether performance follows a rhythmic pattern across cue-to-target delays. We manipulated a location's task relevance by altering the validity of the cue, which predicted the correct location in 60%, 80%, or 100% of trials. Results revealed significant 4 Hz performance fluctuations at cued right visual field targets with low cue validity (60%), suggesting regular sampling of both locations. With high cue validity (80%), we observed a peak at 8 Hz for non-cued targets, although this was not significant. These results were in line with our hypothesis of a goal-directed balancing of attentional sampling (cued location) and shifting (non-cued location) depending on the relevance of locations in a visual scene. However, considering the hemifield specificity of the effect, together with the absence of the expected effects for cued trials in the high-validity conditions, we further discuss the interpretation of the data.

    Additional information

    supplementary information
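    The spectral analysis implied by the Van der Werf et al. (2023) abstract above (looking for 4 Hz or 8 Hz fluctuations in accuracy across cue-to-target delays) can be sketched in a few lines of Python. The sampling step, trend removal, and simulated data below are illustrative assumptions, not the study's analysis pipeline.

      # Detrend a behavioural accuracy time course across cue-to-target delays
      # and inspect its amplitude spectrum for low-frequency peaks.
      import numpy as np

      rng = np.random.default_rng(1)
      delays = np.arange(0.0, 1.0, 0.02)                      # s, sampled every 20 ms
      accuracy = (0.75 + 0.05 * np.sin(2 * np.pi * 4 * delays)
                  + 0.02 * rng.standard_normal(delays.size))  # weak simulated 4 Hz rhythm

      detrended = accuracy - np.polyval(np.polyfit(delays, accuracy, 1), delays)
      spectrum = np.abs(np.fft.rfft(detrended))
      freqs = np.fft.rfftfreq(delays.size, d=0.02)            # Hz

      peak = freqs[np.argmax(spectrum[1:]) + 1]               # skip the DC bin
      print(f"dominant behavioural frequency: {peak:.1f} Hz")

    In practice such spectra are evaluated against permutation-based null distributions rather than by simply reading off the largest peak.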
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • van der Burght, C. L., Friederici, A. D., Maran, M., Papitto, G., Pyatigorskaya, E., Schroen, J., Trettenbrein, P., & Zaccarella, E. (2023). Cleaning up the brickyard: How theory and methodology shape experiments in cognitive neuroscience of language. Journal of Cognitive Neuroscience, 35(12), 2067-2088. doi:10.1162/jocn_a_02058.

    Abstract

    The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement amongst cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of conclusions drawn rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modelling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
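    The interval-ratio measure mentioned in the thebeat abstract above can be written out directly; the short sketch below uses plain NumPy rather than thebeat's own API (whose exact function names are not reproduced here), with invented onset times for illustration.

      # Ratios of successive inter-onset intervals, r_k = IOI_k / (IOI_k + IOI_k+1);
      # a value of 0.5 corresponds to a perfectly isochronous pair of intervals.
      import numpy as np

      def interval_ratios(onsets_ms):
          """Interval ratios of a temporal sequence given its onset times in ms."""
          iois = np.diff(np.asarray(onsets_ms, dtype=float))
          return iois[:-1] / (iois[:-1] + iois[1:])

      onsets = [0, 510, 1000, 1495, 2010, 2500]   # hypothetical onsets in ms
      print(np.round(interval_ratios(onsets), 3))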
  • Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.

    Abstract

    There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus.

    Additional information

    supplementary materials
  • Verga, L., D’Este, G., Cassani, S., Leitner, C., Kotz, S. A., Ferini-Strambi, L., & Galbiati, A. (2023). Sleeping with time in mind? A literature review and a proposal for a screening questionnaire on self-awakening. PLoS One, 18(3): e0283221. doi:10.1371/journal.pone.0283221.

    Abstract

    Some people report being able to spontaneously “time” the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aim is to i) contextualise the phenomenon, ii) propose an operating definition, and iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening occurring in a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported to be self-awakeners, with an average age of 23.24 years and a slight predominance of males compared to females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help in characterising a theoretical framework for self-awakenings, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
  • Verga, L., Kotz, S. A., & Ravignani, A. (2023). The evolution of social timing. Physics of Life Reviews, 46, 131-151. doi:10.1016/j.plrev.2023.06.006.

    Abstract

    Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
  • Verga, L., Schwartze, M., & Kotz, S. A. (2023). Neurophysiology of language pathologies. In M. Grimaldi, E. Brattico, & Y. Shtyrov (Eds.), Language Electrified: Neuromethods (pp. 753-776). New York, NY: Springer US. doi:10.1007/978-1-0716-3263-5_24.

    Abstract

    Language- and speech-related disorders are among the most frequent consequences of developmental and acquired pathologies. While classical approaches to the study of these disorders typically employed the lesion method to unveil one-to-one correspondences between the location and extent of brain damage and the corresponding symptoms, recent advances advocate the use of online methods of investigation. For example, the use of electrophysiology or magnetoencephalography—especially when combined with anatomical measures—allows for in vivo tracking of real-time language and speech events, and thus represents a particularly promising avenue for future research targeting rehabilitative interventions. In this chapter, we provide a comprehensive overview of language and speech pathologies arising from cortical and/or subcortical damage, and their corresponding neurophysiological and pathological symptoms. Building upon the reviewed evidence and literature, we aim to describe how the neurophysiology of the language network changes as a result of brain damage. We conclude by summarizing the evidence presented in this chapter and suggesting directions for future research.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 109-127.

    Abstract

    The acquisition of non-modal auxiliaries has been assumed to constitute an important step in the acquisition of finiteness in Germanic languages (cf. Jordens/Dimroth 2005, Jordens 2004, Becker 2005). This paper focuses on the role of the auxiliary hebben ('to have') in the acquisition of Dutch as a second language. More specifically, it investigates whether learners' production of hebben is related to their acquisition of two phenomena commonly associated with finiteness, i.e., topicalization and negation. Data are presented from 16 Turkish and 36 Moroccan learners of Dutch who participated in an experiment involving production and imitation tasks. The production data suggest that learners use topicalization and post-verbal negation only after they have learned to produce the auxiliary hebben. The results from the imitation task indicate that learners are more sensitive to topicalization and post-verbal negation in sentences with hebben than in sentences with lexical verbs. Interestingly, this also holds for learners who did not show productive command of hebben in the production tasks. Thus, in general, the results of the experiment provide support for the idea that non-modal auxiliaries are crucial in the acquisition of (certain properties of) finiteness.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Toegepaste Taalwetenschap in Artikelen, 73, 41-52.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Vessel, E. A., Pasqualette, L., Uran, C., Koldehoff, S., Bignardi, G., & Vinck, M. (2023). Self-relevance predicts the aesthetic appeal of real and synthetic artworks generated via neural style transfer. Psychological Science, 34(9), 1007-1023. doi:10.1177/09567976231188107.

    Abstract

    What determines the aesthetic appeal of artworks? Recent work suggests that aesthetic appeal can, to some extent, be predicted from a visual artwork’s image features. Yet a large fraction of variance in aesthetic ratings remains unexplained and may relate to individual preferences. We hypothesized that an artwork’s aesthetic appeal depends strongly on self-relevance. In a first study (N = 33 adults, online replication N = 208), rated aesthetic appeal for real artworks was positively predicted by rated self-relevance. In a second experiment (N = 45 online), we created synthetic, self-relevant artworks using deep neural networks that transferred the style of existing artworks to photographs. Style transfer was applied to self-relevant photographs selected to reflect participant-specific attributes such as autobiographical memories. Self-relevant, synthetic artworks were rated as more aesthetically appealing than matched control images, at a level similar to human-made artworks. Thus, self-relevance is a key determinant of aesthetic appeal, independent of artistic skill and image features.
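    As an illustration of the style-transfer technique mentioned in the abstract, the sketch below shows the core idea in PyTorch: an image is optimised so that its deep-network features stay close to a content photograph while its feature correlations (Gram matrices) approach those of a style artwork. This is only a minimal, hypothetical sketch, not the study's actual pipeline; the small randomly initialised convolution stack stands in for a pretrained network such as VGG, and random tensors stand in for a self-relevant photograph and an artwork.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Placeholder feature extractor (a pretrained VGG would be used in practice)
    features = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )
    for p in features.parameters():
        p.requires_grad_(False)

    def gram(x):
        # Gram matrix of feature maps: channel-by-channel correlations
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    content = torch.rand(1, 3, 64, 64)   # stand-in for a self-relevant photo
    style = torch.rand(1, 3, 64, 64)     # stand-in for an existing artwork
    image = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([image], lr=0.02)

    content_feat = features(content)
    style_gram = gram(features(style))

    for step in range(200):
        opt.zero_grad()
        feat = features(image)
        # content term keeps the photo's structure; style term matches Gram statistics
        loss = ((feat - content_feat) ** 2).mean() \
             + 1e3 * ((gram(feat) - style_gram) ** 2).mean()
        loss.backward()
        opt.step()
    ```

    In practice, feature and Gram losses are taken at several layers of a pretrained network and weighted against each other; the principle is the same.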

  • Vigliocco, G., Lauer, M., Damian, M. F., & Levelt, W. J. M. (2002). Semantic and syntactic forces in noun phrase production. Journal of Experimental Psychology: Learning, Memory and Cognition, 28(1), 46-58. doi:10.1037//0278-7393.28.1.46.

    Abstract

    Three experiments investigated semantic and syntactic effects in the production of phrases in Dutch. Bilingual participants were presented with English nouns and were asked to produce an adjective + noun phrase in Dutch including the translation of the noun. In 2 experiments, the authors blocked items by either semantic category or grammatical gender. Participants performed the task more slowly when the target nouns were of the same semantic category than when they were from different categories, and faster when the target nouns had the same gender than when they had different genders. In a final experiment, both manipulations were crossed. The authors replicated the results of the first 2 experiments, and no interaction was found. These findings suggest a feedforward flow of activation between lexico-semantic and lexico-syntactic information.
  • Vigliocco, G., Vinson, D. P., Damian, M. F., & Levelt, W. J. M. (2002). Semantic distance effects on object and action naming. Cognition, 85, B61-B69. doi:10.1016/S0010-0277(02)00107-5.

    Abstract

    Graded interference effects were tested in a naming task, in parallel for objects and actions. Participants named either object or action pictures presented in the context of other pictures (blocks) that were semantically very similar, somewhat semantically similar, or semantically dissimilar. We found that naming latencies for both object and action words were modulated by the semantic similarity between the exemplars in each block, providing evidence of graded semantic effects in both domains.
  • Vingerhoets, G., Verhelst, H., Gerrits, R., Badcock, N., Bishop, D. V. M., Carey, D., Flindall, J., Grimshaw, G., Harris, L. J., Hausmann, M., Hirnstein, M., Jäncke, L., Joliot, M., Specht, K., Westerhausen, R., & LICI consortium (2023). Laterality indices consensus initiative (LICI): A Delphi expert survey report on recommendations to record, assess, and report asymmetry in human behavioural and brain research. Laterality, 28(2-3), 122-191. doi:10.1080/1357650X.2023.2199963.

    Abstract

    Laterality indices (LIs) quantify the left-right asymmetry of brain and behavioural variables and provide a measure that is statistically convenient and seemingly easy to interpret. Substantial variability in how structural and functional asymmetries are recorded, calculated, and reported, however, suggests little agreement on the conditions required for their valid assessment. The present study aimed for consensus on general aspects of laterality research and, more specifically, on aspects within particular methods or techniques (i.e., dichotic listening, visual half-field technique, performance asymmetries, preference bias reports, electrophysiological recording, functional MRI, structural MRI, and functional transcranial Doppler sonography). Experts in laterality research were invited to participate in an online Delphi survey to evaluate consensus and stimulate discussion. In Round 0, 106 experts generated 453 statements on what they considered good practice in their field of expertise. Statements were organised into a 295-statement survey that the experts were then asked, in Round 1, to independently assess for importance and support, which further reduced the survey to 241 statements that were presented again to the experts in Round 2. Based on the Round 2 input, we present a set of critically reviewed key recommendations for recording, assessing, and reporting laterality research across various methods.
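    As a concrete illustration of the quantity under discussion, a commonly used formulation of the laterality index is (L − R) / (L + R). Note that this is only one of many variants addressed by the recommendations, not a formula prescribed by the survey itself.

    ```python
    def laterality_index(left, right):
        """Commonly used laterality index: +1 = fully left-lateralised,
        0 = symmetric, -1 = fully right-lateralised."""
        return (left - right) / (left + right)

    # e.g., 130 suprathreshold voxels in a left-hemisphere ROI vs. 70 on the right
    print(laterality_index(left=130.0, right=70.0))  # 0.3 -> leftward asymmetry
    ```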

  • Vonk, W. (2002). Zin in tekst. Psycholinguïstisch onderzoek naar het begrijpen van taal [Sense in text: Psycholinguistic research on language comprehension]. Gramma/TTT, 8, 267-284.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
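    To make the competitive-inhibition idea more concrete, the sketch below iterates a toy winner-take-all competition between two mutually exclusive attachment links. The update rule, parameter values, and link names are purely illustrative assumptions; they are not the model's actual equations.

    ```python
    def compete(strength, incompatible, steps=200,
                decay=0.05, buildup=0.10, inhibition=0.20):
        """Iterate a simple winner-take-all competition among candidate links.

        strength:     dict mapping link name -> initial (entry) activation
        incompatible: set of frozensets naming mutually exclusive link pairs
        """
        strength = dict(strength)
        for _ in range(steps):
            updated = {}
            for name, s in strength.items():
                # inhibition received from all incompatible competitors
                inhib = sum(strength[other] for other in strength
                            if frozenset((name, other)) in incompatible)
                delta = buildup * (1.0 - s) - decay * s - inhibition * inhib
                updated[name] = max(0.0, s + delta)
            strength = updated
        return strength

    # Toy local ambiguity: a PP may attach to the verb or to the noun, not both.
    # The verb attachment enters with slightly higher activation and wins.
    initial = {"attach_PP_to_V": 1.0, "attach_PP_to_N": 0.9}
    print(compete(initial, {frozenset(initial)}))
    ```

    With these settings the stronger candidate settles at a positive equilibrium while its competitor is suppressed to zero, mimicking the selection of a single winning unification link.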
