Publications

  • Rossi, G. (2009). Il discorso scritto interattivo degli SMS: Uno studio pragmatico del "messaggiare". Rivista Italiana di Dialettologia, 33, 143-193. doi:10.1400/148734.
  • Rowland, C. F., & Theakston, A. L. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 2: The modals and auxiliary DO. Journal of Speech, Language, and Hearing Research, 52, 1471-1492. doi:10.1044/1092-4388(2009/08-0037a).

    Abstract

    Purpose: The study of auxiliary acquisition is central to work on language development and has attracted theoretical work from both nativist and constructivist approaches. This study is part of a 2-part companion set that represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. The aim of the research described in this part is to track the development of modal auxiliaries and auxiliary DO in questions and declaratives to provide a more complete picture of the development of the auxiliary system in English-speaking children. Method: Twelve English-speaking children participated in 2 tasks designed to elicit auxiliaries CAN, WILL, and DOES in declaratives and yes/no questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of the target auxiliaries differed in complex ways according to auxiliary, polarity, and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusions: These data cannot be explained in full by existing theories of auxiliary acquisition. Researchers working within both generativist and constructivist frameworks need to develop more detailed theories of acquisition that predict the pattern of acquisition observed.
  • Rubianes, M., Drijvers, L., Muñoz, F., Jiménez-Ortega, L., Almeida-Rivera, T., Sánchez-García, J., Fondevila, S., Casado, P., & Martín-Loeches, M. (2024). The self-reference effect can modulate language syntactic processing even without explicit awareness: An electroencephalography study. Journal of Cognitive Neuroscience, 36(3), 460-474. doi:10.1162/jocn_a_02104.

    Abstract

    Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while the masked face identity (self, friend, or unknown faces) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant to early syntactic operations (around 150–550 msec). Our data also provide further evidence of self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
  • Rubio-Fernández, P. (2024). Cultural evolutionary pragmatics: Investigating the codevelopment and coevolution of language and social cognition. Psychological Review, 131(1), 18-35. doi:10.1037/rev0000423.

    Abstract

    Language and social cognition come together in communication, but their relation has been intensely contested. Here, I argue that these two distinctively human abilities are connected in a positive feedback loop, whereby the development of one cognitive skill boosts the development of the other. More specifically, I hypothesize that language and social cognition codevelop in ontogeny and coevolve in diachrony through the acquisition, mature use, and cultural evolution of reference systems (e.g., demonstratives: “this” vs. “that”; articles: “a” vs. “the”; pronouns: “I” vs. “you”). I propose to study the connection between reference systems and communicative social cognition across three parallel timescales—language acquisition, language use, and language change, as a new research program for cultural evolutionary pragmatics. Within that framework, I discuss the coevolution of language and communicative social cognition as cognitive gadgets, and introduce a new methodological approach to study how universals and cross-linguistic differences in reference systems may result in different developmental pathways to human social cognition.
  • De Ruiter, L. E. (2009). The prosodic marking of topical referents in the German "Vorfeld" by children and adults. The Linguistic Review, 26, 329-354. doi:10.1515/tlir.2009.012.

    Abstract

    This article reports on the analysis of prosodic marking of topical referents in the German prefield by 5- and 7-year-old children and adults. Natural speech data was obtained from a picture-elicited narration task. The data was analyzed both phonologically and phonetically. In line with previous findings, adult speakers realized topical referents predominantly with the accents L+H* and L*+H, but H* accents and unaccented items were also observed. Children used the same accent types as adults, but the accent types were distributed differently. Also, children aligned pitch minima earlier than adults and produced accents with a decreased speed of pitch change. Possible reasons for these findings are discussed. Contrast – defined in terms of a change of subjecthood – did not affect the choice of pitch accent type and did not influence phonetic realization, underlining the fact that accentuation is often a matter of individual speaker choice.

  • Scheeringa, R., Petersson, K. M., Oostenveld, R., Norris, D. G., Hagoort, P., & Bastiaansen, M. C. M. (2009). Trial-by-trial coupling between EEG and BOLD identifies networks related to alpha and theta EEG power increases during working memory maintenance. Neuroimage, 44, 1224-1238. doi:10.1016/j.neuroimage.2008.08.041.

    Abstract

    PET and fMRI experiments have previously shown that several brain regions in the frontal and parietal lobe are involved in working memory maintenance. MEG and EEG experiments have shown parametric increases with load for oscillatory activity in posterior alpha and frontal theta power. In the current study we investigated whether the areas found with fMRI can be associated with these alpha and theta effects by measuring simultaneous EEG and fMRI during a modified Sternberg task. This allowed us to correlate EEG at the single trial level with the fMRI BOLD signal by forming a regressor based on single trial alpha and theta power estimates. We observed a right posterior, parametric alpha power increase, which was functionally related to decreases in BOLD in the primary visual cortex and in the posterior part of the right middle temporal gyrus. We relate this finding to the inhibition of neuronal activity that may interfere with WM maintenance. An observed parametric increase in frontal theta power was correlated to a decrease in BOLD in regions that together form the default mode network. We did not observe correlations between oscillatory EEG phenomena and BOLD in the traditional WM areas. In conclusion, the study shows that simultaneous EEG-fMRI recordings can be successfully used to identify the emergence of functional networks in the brain during the execution of a cognitive task.
  • Schijven, D., Soheili-Nezhad, S., Fisher, S. E., & Francks, C. (2024). Exome-wide analysis implicates rare protein-altering variants in human handedness. Nature Communications, 15: 2632. doi:10.1038/s41467-024-46277-w.

    Abstract

    Handedness is a manifestation of brain hemispheric specialization. Left-handedness occurs at increased rates in neurodevelopmental disorders. Genome-wide association studies have identified common genetic effects on handedness or brain asymmetry, which mostly involve variants outside protein-coding regions and may affect gene expression. Implicated genes include several that encode tubulins (microtubule components) or microtubule-associated proteins. Here we examine whether left-handedness is also influenced by rare coding variants (frequencies ≤ 1%), using exome data from 38,043 left-handed and 313,271 right-handed individuals from the UK Biobank. The beta-tubulin gene TUBB4B shows exome-wide significant association, with a rate of rare coding variants 2.7 times higher in left-handers than right-handers. The TUBB4B variants are mostly heterozygous missense changes, but include two frameshifts found only in left-handers. Other TUBB4B variants have been linked to sensorineural and/or ciliopathic disorders, but not the variants found here. Among genes previously implicated in autism or schizophrenia by exome screening, DSCAM and FOXP1 show evidence for rare coding variant association with left-handedness. The exome-wide heritability of left-handedness due to rare coding variants was 0.91%. This study reveals a role for rare, protein-altering variants in left-handedness, providing further evidence for the involvement of microtubules and disorder-relevant genes.
  • Schiller, N., Horemans, I., Ganushchak, L. Y., & Koester, D. (2009). Event-related brain potentials during monitoring of speech errors. NeuroImage, 44, 520-530. doi:10.1016/j.neuroimage.2008.09.019.

    Abstract

    When we perceive speech, our goal is to extract the meaning of the verbal message, which includes semantic processing. However, how deeply do we process speech in different situations? In two experiments, native Dutch participants heard spoken sentences describing simultaneously presented pictures. Sentences either correctly described the pictures or contained an anomalous final word (i.e. a semantically or phonologically incongruent word). In the first experiment, spoken sentences were task-irrelevant and both anomalous conditions elicited similar centro-parietal N400s that were larger in amplitude than the N400 for the correct condition. In the second experiment, we ensured that participants processed the same stimuli semantically. In an early time window, we found similar phonological mismatch negativities for both anomalous conditions compared to the correct condition. These negativities were followed by an N400 that was larger for semantic than phonological errors. Together, these data suggest that we process speech semantically, even if the speech is task-irrelevant. Once listeners allocate more cognitive resources to the processing of speech, we suggest that they make predictions for upcoming words, presumably by means of the production system and an internal monitoring loop, to facilitate lexical processing of the perceived speech.
  • Schoffelen, J.-M., & Gross, J. (2009). Source connectivity analysis with MEG and EEG. Human Brain Mapping, 30, 1857-1865. doi:10.1002/hbm.20745.

    Abstract

    Interactions between functionally specialized brain regions are crucial for normal brain function. Magnetoencephalography (MEG) and electroencephalography (EEG) are techniques suited to capture these interactions, because they provide whole head measurements of brain activity in the millisecond range. The activity of any underlying source is picked up by more than one sensor; this field spread severely limits the utility of connectivity measures computed directly between sensor recordings. Consequently, neuronal interactions should be studied on the level of the reconstructed sources. This article reviews several methods that have been applied to investigate interactions between brain regions in source space. We will mainly focus on the different measures used to quantify connectivity, and on the different strategies adopted to identify regions of interest. Despite various successful accounts of MEG and EEG source connectivity, caution with respect to the interpretation of the results is still warranted. This is due to the fact that effects of field spread can never be completely abolished in source space. However, in this very exciting and developing field of research this cautionary note should not discourage researchers from further investigation into the connectivity between neuronal sources.
  • Schuppler, B., van Doremalen, J., Scharenborg, O., Cranen, B., & Boves, L. (2009). Using temporal information for improving articulatory-acoustic feature classification. Automatic Speech Recognition and Understanding, IEEE 2009 Workshop, 70-75. doi:10.1109/ASRU.2009.5373314.

    Abstract

    This paper combines acoustic features with a high temporal and a high frequency resolution to reliably classify articulatory events of short duration, such as bursts in plosives. SVM classification experiments on TIMIT and SVArticulatory showed that articulatory-acoustic features (AFs) based on a combination of MFCCs derived from a long window of 25 ms and a short window of 5 ms that are both shifted with 2.5 ms steps (Both) outperform standard MFCCs derived with a window of 25 ms and a shift of 10 ms (Baseline). Finally, comparison of the TIMIT and SVArticulatory results showed that for classifiers trained on data that allows for asynchronously changing AFs (SVArticulatory) the improvement from Baseline to Both is larger than for classifiers trained on data where AFs change simultaneously with the phone boundaries (TIMIT).
  • Scott, S. K., McGettigan, C., & Eisner, F. (2009). A little more conversation, a little less action: Candidate roles for motor cortex in speech perception. Nature Reviews Neuroscience, 10(4), 295-302. doi:10.1038/nrn2603.

    Abstract

    The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor-cortex activation is essential in joint speech, particularly for the timing of turn-taking.
  • Scott, L. J., Muglia, P., Kong, X. Q., Guan, W., Flickinger, M., Upmanyu, R., Tozzi, F., Li, J. Z., Burmeister, M., Absher, D., Thompson, R. C., Francks, C., Meng, F., Antoniades, A., Southwick, A. M., Schatzberg, A. F., Bunney, W. E., Barchas, J. D., Jones, E. G., Day, R., Matthews, K., McGuffin, P., Strauss, J. S., Kennedy, J. L., Middleton, L., Roses, A. D., Watson, S. J., Vincent, J. B., Myers, R. M., Farmer, A. E., Akil, H., Burns, D. K., & Boehnke, M. (2009). Genome-wide association and meta-analysis of bipolar disorder in individuals of European ancestry. Proceedings of the National Academy of Sciences of the United States of America, 106(18), 7501-7506. doi:10.1073/pnas.0813386106.

    Abstract

    Bipolar disorder (BP) is a disabling and often life-threatening disorder that affects approximately 1% of the population worldwide. To identify genetic variants that increase the risk of BP, we genotyped on the Illumina HumanHap550 Beadchip 2,076 bipolar cases and 1,676 controls of European ancestry from the National Institute of Mental Health Human Genetics Initiative Repository, and the Prechter Repository and samples collected in London, Toronto, and Dundee. We imputed SNP genotypes and tested for SNP-BP association in each sample and then performed meta-analysis across samples. The strongest association P value for this 2-study meta-analysis was 2.4 x 10^-6. We next imputed SNP genotypes and tested for SNP-BP association based on the publicly available Affymetrix 500K genotype data from the Wellcome Trust Case Control Consortium for 1,868 BP cases and a reference set of 12,831 individuals. A 3-study meta-analysis of 3,683 nonoverlapping cases and 14,507 extended controls on >2.3 M genotyped and imputed SNPs resulted in 3 chromosomal regions with association P approximately 10^-7: 1p31.1 (no known genes), 3p21 (>25 known genes), and 5q15 (MCTP1). The most strongly associated nonsynonymous SNP rs1042779 (OR = 1.19, P = 1.8 x 10^-7) is in the ITIH1 gene on chromosome 3, with other strongly associated nonsynonymous SNPs in GNL3, NEK4, and ITIH3. Thus, these chromosomal regions harbor genes implicated in cell cycle, neurogenesis, neuroplasticity, and neurosignaling. In addition, we replicated the reported ANK3 association results for SNP rs10994336 in the nonoverlapping GSK sample (OR = 1.37, P = 0.042). Although these results are promising, analysis of additional samples will be required to confirm that variant(s) in these regions influence BP risk.

  • Segaert, K., Nygård, G. E., & Wagemans, J. (2009). Identification of everyday objects on the basis of kinetic contours. Vision Research, 49(4), 417-428. doi:10.1016/j.visres.2008.11.012.

    Abstract

    Using kinetic contours derived from everyday objects, we investigated how motion affects object identification. In order not to be distinguishable when static, kinetic contours were made from random dot displays consisting of two regions, inside and outside the object contour. In Experiment 1, the dots were moving in only one of two regions. The objects were identified nearly equally well as soon as the dots either in the figure or in the background started to move. RTs decreased with increasing motion coherence levels and were shorter for complex, less compact objects than for simple, more compact objects. In Experiment 2, objects could be identified when the dots were moving both in the figure and in the background with speed and direction differences between the two. A linear increase in either the speed difference or the direction difference caused a linear decrease in RT for correct identification. In addition, the combination of speed and motion differences appeared to be super-additive.
  • Seidl, A., Cristia, A., Bernard, A., & Onishi, K. H. (2009). Allophonic and phonemic contrasts in infants' learning of sound patterns. Language Learning and Development, 5, 191-202. doi:10.1080/15475440902754326.

    Abstract

    French-learning 11-month-old and English-learning 11- and 4-month-old infants were familiarized with consonant–vowel–consonant syllables in which the final consonants were dependent on whether the preceding vowel was oral or nasal. Oral and nasal vowels are present in the ambient language of all participants, but vowel nasality is phonemic (contrastive) in French and allophonic (noncontrastive) in English. After familiarization, infants heard novel syllables that either followed or violated the familiarized patterns. French-learning 11-month-olds and English-learning 4-month-olds displayed a reliable pattern of preference demonstrating learning and generalization of the patterns, while English-learning 11-month-olds oriented equally to syllables following and violating the familiarized patterns. The results are consistent with an experience-driven reduction of attention to allophonic contrasts by as early as 11 months, which influences phonotactic learning.
  • Seijdel, N., Schoffelen, J.-M., Hagoort, P., & Drijvers, L. (2024). Attention drives visual processing and audiovisual integration during multimodal communication. The Journal of Neuroscience, 44(10): e0870232023. doi:10.1523/JNEUROSCI.0870-23.2023.

    Abstract

    During communication in real-life settings, our brain often needs to integrate auditory and visual information, and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging (RIFT) and magnetoencephalography (MEG) to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing non-linear signal interactions, was enhanced in left frontotemporal and frontal regions. Focusing on LIFG (Left Inferior Frontal Gyrus), this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.

  • Sekine, K. (2009). Changes in frame of reference use across the preschool years: A longitudinal study of the gestures and speech produced during route descriptions. Language and Cognitive Processes, 24(2), 218-238. doi:10.1080/01690960801941327.

    Abstract

    This study longitudinally investigated developmental changes in the frame of reference used by children in their gestures and speech. Fifteen children, between 4 and 6 years of age, were asked once a year to describe their route home from their nursery school. When the children were 4 years old, they tended to produce gestures that directly and continuously indicated their actual route in a large gesture space. In contrast, as 6-year-olds, their gestures were segmented and did not match the actual route. Instead, at age 6, the children seemed to create a virtual space in front of themselves to symbolically describe their route. These results indicate that the use of frames of reference develops across the preschool years, shifting from an actual environmental to an abstract environmental frame of reference. Factors underlying the development of frame of reference, including verbal encoding skills and experience, are discussed.
  • Sekine, K., & Özyürek, A. (2024). Children benefit from gestures to understand degraded speech but to a lesser extent than adults. Frontiers in Psychology, 14: 1305562. doi:10.3389/fpsyg.2023.1305562.

    Abstract

    The present study investigated to what extent children, compared to adults, benefit from gestures to disambiguate degraded speech by manipulating speech signals and manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a comparable level of accuracy to that of adults in the degraded-speech-only condition. Furthermore, for adults, the enhancement of gestures was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures help children to disambiguate degraded speech, but children need more phonological information than adults to benefit from use of gestures. Children’s multimodal language integration needs to further develop to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.

  • Senft, G., Östman, J.-O., & Verschueren, J. (Eds.). (2009). Culture and language use. Amsterdam: John Benjamins.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (2009). [Review of the book Geschichten und Gesänge von der Insel Nias in Indonesien ed. by Johannes Maria Hämmerle]. Rundbrief - Forum für Mitglieder des Pazifik-Netzwerkes e.V., 78/09, 29-31.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (1987). Kilivila color terms. Studies in Language, 11, 313-346.
  • Senft, G. (1987). Nanam'sa Bwena - Gutes Denken: Eine ethnolinguistische Fallstudie über eine Dorfversammlung auf den Trobriand Inseln. Zeitschrift für Ethnologie, 112, 181-222.
  • Senft, G. (1991). Network models to describe the Kilivila classifier system. Oceanic Linguistics, 30, 131-155. Retrieved from http://www.jstor.org/stable/3623085.
  • Senft, G., & Basso, E. B. (Eds.). (2009). Ritual communication. Oxford: Berg.
  • Senft, G. (1987). Rituelle Kommunikation auf den Trobriand Inseln. Zeitschrift für Literaturwissenschaft und Linguistik, 65, 105-130.
  • Senft, G. (1987). The system of classificatory particles in Kilivila reconsidered: First results on its inventory, its acquisition, and its usage. Language and Linguistics in Melanesia, 16, 100-125.
  • Seuren, P. A. M. (1987). A note on siki. Journal of Pidgin and Creole Languages, 2(1), 57-62. doi:10.1075/jpcl.2.1.07pie.
  • Seuren, P. A. M. (2009). Concerning the roots of transformational generative grammar [Review article]. Historiographia Linguistica, 36, 97-115. doi:10.1075/hl.36.1.05seu.
  • Seuren, P. A. M. (1973). [Review of the book A comprehensive etymological dictionary of the English language by Ernst Klein]. Neophilologus, 57(4), 423-426. doi:10.1007/BF01515518.
  • Seuren, P. A. M. (1973). [Review of the book Philosophy of language by Robert J. Clack and Bertrand Russell]. Foundations of Language, 9(3), 440-441.
  • Seuren, P. A. M. (1973). [Review of the book Semantics. An interdisciplinary reader in philosophy, linguistics and psychology ed. by Danny D. Steinberg and Leon A. Jakobovits]. Neophilologus, 57(2), 198-213. doi:10.1007/BF01514332.
  • Seuren, P. A. M. (1973). Generative Semantik: Semantische syntax. Düsseldorf: Schwann Verlag.
  • Seuren, P. A. M. (1987). How relevant?: A commentary on Sperber and Wilson "Précis of relevance: Communication and cognition". Behavioral and Brain Sciences, 10, 731-733. doi:10.1017/S0140525X00055564.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (2009). Language from within: Vol. 1. Language in cognition. Oxford: Oxford University Press.

    Abstract

    Language in Cognition argues that language is based on the human construal of reality. Humans refer to and quantify over virtual entities with the same ease as they do over actual entities: the natural ontology of language, the author argues, must therefore comprise both actual and virtual entities and situations. He reformulates speech act theory, suggesting that the primary function of language is less the transfer of information than the establishing of socially binding commitments or appeals based on the proposition expressed. This leads him first to a new analysis of the systems and structures of cognitive language machinery and their ecological embedding, and finally to a reformulation of the notion of meaning, in which sentence meaning is distinguished from lexical meaning and the vagaries and multifarious applications of lexical meanings may be explained and understood. This is the first of a two-volume foundational study of language, published under the title, Language from Within. Pieter Seuren discusses and analyses such apparently diverse issues as the ontology underlying the semantics of language, speech act theory, intensionality phenomena, the machinery and ecology of language, sentential and lexical meaning, the natural logic of language and cognition, and the intrinsically context-sensitive nature of language - and shows them to be intimately linked. Throughout his ambitious enterprise, he maintains a constant dialogue with established views, reflecting on their development from Ancient Greece to the present. The resulting synthesis concerns central aspects of research and theory in linguistics, philosophy, and cognitive science.
  • Seuren, P. A. M. (1987). Les paradoxes et le langage. Logique et Analyse, 30(120), 365-383.
  • Seuren, P. A. M. (1991). Grammatika als algorithme: Rekenen met taal. Koninklijke Nederlandse Akademie van Wetenschappen. Mededelingen van de Afdeling Letterkunde, Nieuwe Reeks, 54(2), 25-63.
  • Seuren, P. A. M. (2009). The clitics mechanism in French and Italian. Probus, 21(1), 83-142. doi:10.1515/prbs.2009.004.

    Abstract

    The article concentrates on the question of the composition, the internal ordering and the placement of clitic-clusters (C-clusters) in French and Italian, though clitic data from other languages are drawn in occasionally. The system proposed is top-down transformational, in the terms of Semantic Syntax (Seuren, Blackwell, 1996). Clitics are taken to originate in underlying structure as canonical argument terms or adverbial constituents of clauses. During the process of transformation from semantic to surface form, nonfocus, nonsubject, pronominal argument terms are assigned values for the features of animacy ([±an]), dative status ([±dat]) and reflexivity ([±refl]). On the basis of these, the rule feature cm, inducing clitic movement, is assigned or withheld. Plus-values increase, and minus-values reduce, the “semantic weight” of the clitics in question. Pronouns without the feature cm are not cliticised and stay in their canonical term position in their full phonological form. Pronouns with the feature cm are attached to the nearest verb form giving rise to clitic clusters, which accounts for the composition of well-formed C-clusters. The attachment of clitics to a cluster occurs in a fixed order, which accounts for the ordering of clitics in well-formed clusters. Branching directionality, together with a theory of complementation, accounts for the placement of C-clusters. Clitics often take on a reduced phonological form. It is argued that, in French and Italian, which are languages with a right-branching syntax and a left-branching flectional morphology, postverbal clitics, or enclitics, are part of left-branching structures and hence fit naturally into the morphology. They are best categorised as affixes. Occasionally, as in Italian glielo, dative clitics (e.g., gli) turn preceding lighter clitics (e.g., lo) into affixes, resulting in the left-branching structure glielo, where -lo is an affix. 

    In a brief Intermezzo, instances are shown of the irregular but revealing lui-le-lui phenomenon in French, and its much less frequent analog in Italian. On these assumptions, supported by the official orthographies, the clitic systems of French and Italian largely coincide. This new analysis of the facts in question invites further reflection on the interface between syntax and morphology. The final section deals with reflexive clitics. There, the system begins to be unable to account for the observed facts. At this end, therefore, the system is allowed to remain fraying, till further research brings greater clarity.
  • Seuren, P. A. M. (Ed.). (1974). Semantic syntax. Oxford: Oxford University Press.
  • Seuren, P. A. M., & Hamans, C. (2009). Semantic conditioning of syntactic rules: Evidentiality and auxiliation in English and Dutch. Folia Linguistica, 43(1), 135-169. doi:10.1515/FLIN.2009.004.

    Abstract

    Ever since the category of evidentiality has been identified in the verbal grammar of certain languages, it has been assumed that evidentiality plays no role in the grammars of those languages that have not incorporated it into their verb morphology or at least their verb clusters. The present paper attempts to show that even if evidentiality is not visible in the verbal grammar of English and Dutch, it appears to be a motivating factor, both historically and synchronically, in the process whereby evidential predicates are made to play a subordinate syntactic role with regard to their embedded subject clause. This process, known as AUXILIATION (Kuteva 2001), appears to manifest itself in a variety of, often successive, grammatical processes or rules, such as Subject-to-Subject Raising (the subject of the embedded clause becomes the subject of the main verb, as in John is likely to be late), V-ING (as in The man stopped breathing), Incorporation-by-Lowering (the evidential main verb is lowered on to the V-constituent of the embedded subject clause, as in John may have left), or Incorporation-by-Raising (also known as Predicate Raising), not or hardly attested in English but dominant in Dutch. A list is provided of those English (and Dutch) predicates that induce one of the above-mentioned auxiliation rules and it is checked how many of those have an evidential meaning. This is set off against evidential predicates that do not induce an auxiliation rule. It results that, for English and Dutch, lexical evidentiality is a powerful determinant for the induction of syntactic auxiliation.
  • Seuren, P. A. M. (1973). Predicate raising and dative in French and Sundry languages. Trier: L.A.U.T. (Linguistic Agency University of Trier).
  • Seuren, P. A. M. (1973). Zero-output rules. Foundations of Language, 10(2), 317-328.
  • Seuren, P. A. M. (1975). Tussen taal en denken: Een bijdrage tot de empirische funderingen van de semantiek. Utrecht: Oosthoek, Scheltema & Holkema.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
  • Shan, W., Zhang, Y., Zhao, J., Wu, S., Zhao, L., Ip, P., Tucker, J. D., & Jiang, F. (2024). Positive parent–child interactions moderate certain maltreatment effects on psychosocial well-being in 6-year-old children. Pediatric Research, 95, 802-808. doi:10.1038/s41390-023-02842-5.

    Abstract

    Background: Positive parental interactions may buffer maltreated children from poor psychosocial outcomes. The study aims to evaluate the associations between various types of maltreatment and psychosocial outcomes in early childhood, and examine the moderating effect of positive parent-child interactions on them.

    Methods: Data were from a representative Chinese 6-year-old children sample (n = 17,088). Caregivers reported the history of child maltreatment perpetrated by any individuals, completed the Strengths and Difficulties Questionnaire as a proxy for psychosocial well-being, and reported the frequency of their interactions with children by the Chinese Parent-Child Interaction Scale.

    Results: Physical abuse, emotional abuse, neglect, and sexual abuse were all associated with higher odds of psychosocial problems (aOR = 1.90 [95% CI: 1.57-2.29], aOR = 1.92 [95% CI: 1.75-2.10], aOR = 1.64 [95% CI: 1.17-2.30], aOR = 2.03 [95% CI: 1.30-3.17]). Positive parent-child interactions were associated with lower odds of psychosocial problems after accounting for different types of maltreatment. The moderating effect of frequent parent-child interactions was found only in the association between occasional only physical abuse and psychosocial outcomes (interaction term: aOR = 0.34, 95% CI: 0.15-0.77).

    Conclusions: Maltreatment and positive parent-child interactions have impacts on psychosocial well-being in early childhood. Positive parent-child interactions could only buffer the adverse effect of occasional physical abuse on psychosocial outcomes. More frequent parent-child interactions may be an important intervention opportunity among some children.

    Impact: It provides the first data on the prevalence of different single types and combinations of maltreatment in early childhood in Shanghai, China by drawing on a city-level population-representative sample. It adds to evidence that different forms and degrees of maltreatment were all associated with a higher risk of psychosocial problems in early childhood. Among them, sexual abuse posed the highest risk, followed by emotional abuse. It innovatively found that higher frequencies of parent-child interactions may provide buffering effects only to children who are exposed to occasional physical abuse. It provides a potential intervention opportunity, especially for physically abused children.
  • Silverstein, P., Bergmann, C., & Syed, M. (Eds.). (2024). Open science and metascience in developmental psychology [Special Issue]. Infant and Child Development, 33(1).
  • Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
  • Simon-Thomas, E. R., Keltner, D. J., Sauter, D., Sinicropi-Yao, L., & Abramson, A. (2009). The voice conveys specific emotions: Evidence from vocal burst displays. Emotion, 9, 838-846. doi:10.1037/a0017810.

    Abstract

    Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., ‘self-conscious,’ ‘pro-social’) than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion.
  • Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics, 27(1), 116-124. doi:10.1075/sll.00084.slo.
  • Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., & Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187.

    Abstract

    Sentence comprehension requires the retrieval of single word information from long-term memory, and the integration of this information into multiword representations. The current functional magnetic resonance imaging study explored the hypothesis that the left posterior temporal gyrus supports the retrieval of lexical-syntactic information, whereas left inferior frontal gyrus (LIFG) contributes to syntactic unification. Twenty-eight subjects read sentences and word sequences containing word-category (noun–verb) ambiguous words at critical positions. Regions contributing to the syntactic unification process should show enhanced activation for sentences compared to words, and only within sentences display a larger signal for ambiguous than unambiguous conditions. The posterior LIFG showed exactly this predicted pattern, confirming our hypothesis that LIFG contributes to syntactic unification. The left posterior middle temporal gyrus was activated more for ambiguous than unambiguous conditions (main effect over both sentences and word sequences), as predicted for regions subserving the retrieval of lexical-syntactic information from memory. We conclude that understanding language involves the dynamic interplay between left inferior frontal and left posterior temporal regions.

    Additional information

    suppl1.pdf suppl2_dutch_stimulus.pdf
  • Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.

    Abstract

    Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging.
  • Stewart, A. J., Kidd, E., & Haigh, M. (2009). Early sensitivity to discourse-level anomalies: Evidence from self-paced reading. Discourse Processes, 46(1), 46-69. doi:10.1080/01638530802629091.

    Abstract

    Two word-by-word, self-paced reading experiments investigated the speed with which readers were sensitive to discourse-level anomalies. An account arguing for delayed sensitivity (Guzman & Klin, 2000) was contrasted with one allowing for rapid sensitivity (Myers & O'Brien, 1998). Anomalies related to spatial information (Experiment 1) and character-attribute information (Experiment 2) were examined. Both experiments found that readers displayed rapid sensitivity to the anomalous information. A reading time penalty was observed for the region of text containing the anomalous information. This finding is most compatible with an account of text processing whereby incoming words are rapidly evaluated with respect to prior context; it is not consistent with an account that argues for delayed integration. Results are discussed in light of their implications for competing models of text processing.
  • Stewart, A. J., Haigh, M., & Kidd, E. (2009). An investigation into the online processing of counterfactual and indicative conditionals. Quarterly Journal of Experimental Psychology, 62(11), 2113-2125. doi:10.1080/17470210902973106.

    Abstract

    The ability to represent conditional information is central to human cognition. In two self-paced reading experiments we investigated how readers process counterfactual conditionals (e.g., If Darren had been athletic, he could probably have played on the rugby team) and indicative conditionals (e.g., If Darren is athletic, he probably plays on the rugby team). In Experiment 1 we focused on how readers process counterfactual conditional sentences. We found that processing of the antecedent of counterfactual conditionals was rapidly constrained by prior context (i.e., knowing whether Darren was or was not athletic). A reading-time penalty was observed for the critical region of text comprising the last word of the antecedent and the first word of the consequent when the information in the antecedent did not fit with prior context. In Experiment 2 we contrasted counterfactual conditionals with indicative conditionals. For counterfactual conditionals we found the same effect on the critical region as we found in Experiment 1. In contrast, however, we found no evidence that processing of the antecedent of indicative conditionals was constrained by prior context. For indicative conditionals (but not for counterfactual conditionals), the results we report are consistent with the suppositional account of conditionals. We propose that current theories of conditionals need to be able to account for online processing differences between indicative and counterfactual conditionals.
  • Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., De Ruiter, J. P., Yoon, K.-E., & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America, 106(26), 10587-10592. doi:10.1073/pnas.0903616106.

    Abstract

    Informal verbal interaction is the core matrix for human social life. A mechanism for coordinating this basic mode of interaction is a system of turn-taking that regulates who is to speak and when. Yet relatively little is known about how this system varies across cultures. The anthropological literature reports significant cultural differences in the timing of turn-taking in ordinary conversation. We test these claims and show that in fact there are striking universals in the underlying pattern of response latency in conversation. Using a worldwide sample of 10 languages drawn from traditional indigenous communities to major world languages, we show that all of the languages tested provide clear evidence for a general avoidance of overlapping talk and a minimization of silence between conversational turns. In addition, all of the languages show the same factors explaining within-language variation in speed of response. We do, however, find differences across the languages in the average gap between turns, within a range of 250 ms from the cross-language mean. We believe that a natural sensitivity to these tempo differences leads to a subjective perception of dramatic or even fundamental differences as offered in ethnographic reports of conversational style. Our empirical evidence suggests robust human universals in this domain, where local variations are quantitative only, pointing to a single shared infrastructure for language use with likely ethological foundations.

    Additional information

    Stivers_2009_universals_suppl.pdf
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they maintain their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Tagliapietra, L., Fanari, R., De Candia, C., & Tabossi, P. (2009). Phonotactic regularities in the segmentation of spoken Italian. Quarterly Journal of Experimental Psychology, 62(2), 392-415. doi:10.1080/17470210801907379.

    Abstract

    Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues, which specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.
  • Tagliapietra, L., Fanari, R., Collina, S., & Tabossi, P. (2009). Syllabic effects in Italian lexical access. Journal of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4.

    Abstract

    Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. Results extend the available Italian data on the processing of stressed syllables showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and it is not delayed till the incoming input corresponds to the first syllable of the word. And, the initial activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that in Romance languages syllables are the units for lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself have to be named.
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ter Avest, I. J., & Mulder, K. (2009). The acquisition of gender agreement in the determiner phrase by bilingual children. Toegepaste Taalwetenschap in Artikelen, 81(1), 133-142.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (2009). [Review of Felix K. Ameka, Alan Dench, and Nicholas Evans (eds). 2006. Catching language: The standing challenge of grammar writing]. Language Documentation & Conservation, 3(1), 132-137. Retrieved from http://hdl.handle.net/10125/4432.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Kan, C. C., Tendolkar, I., & Hagoort, P. (2009). Neural correlates of pragmatic language comprehension in autism disorders. Brain, 132, 1941-1952. doi:10.1093/brain/awp103.

    Abstract

    Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker’s age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding. This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
  • Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van den Brink, D., Buitelaar, J. K., & Hagoort, P. (2009). Unification of speaker and meaning in language comprehension: An fMRI study. Journal of Cognitive Neuroscience, 21, 2085-2099. doi:10.1162/jocn.2008.21161.

    Abstract

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.
  • Theakston, A., & Rowland, C. F. (2009). Introduction to Special Issue: Cognitive approaches to language acquisition. Cognitive Linguistics, 20(3), 477-480. doi:10.1515/COGL.2009.021.
  • Theakston, A. L., & Rowland, C. F. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research, 52, 1449-1470. doi:10.1044/1092-4388(2009/08-0037).

    Abstract

    Purpose: The question of how and when English-speaking children acquire auxiliaries is the subject of extensive debate. Some researchers posit the existence of innately given Universal Grammar principles to guide acquisition, although some aspects of the auxiliary system must be learned from the input. Others suggest that auxiliaries can be learned without Universal Grammar, citing evidence of piecemeal learning in their support. This study represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. Method: Twelve English-speaking children participated in 3 tasks designed to elicit auxiliary BE in declaratives and yes/no and wh-questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of 2 forms of BE (is, are) differed according to auxiliary form and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusion: These data are problematic for existing accounts of auxiliary acquisition and highlight the need for researchers working within both generativist and constructivist frameworks to develop more detailed theories of acquisition that directly predict the pattern of acquisition observed.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

  • Timpson, N. J., Tobias, J. H., Richards, J. B., Soranzo, N., Duncan, E. L., Sims, A.-M., Whittaker, P., Kumanduri, V., Zhai, G., Glaser, B., Eisman, J., Jones, G., Nicholson, G., Prince, R., Seeman, E., Spector, T. D., Brown, M. A., Peltonen, L., Smith, G. D., Deloukas, P., & Evans, D. M. (2009). Common variants in the region around Osterix are associated with bone mineral density and growth in childhood. Human Molecular Genetics, 18(8), 1510-1517. doi:10.1093/hmg/ddp052.

    Abstract

    Peak bone mass achieved in adolescence is a determinant of bone mass in later life. In order to identify genetic variants affecting bone mineral density (BMD), we performed a genome-wide association study of BMD and related traits in 1518 children from the Avon Longitudinal Study of Parents and Children (ALSPAC). We compared results with a scan of 134 adults with high or low hip BMD. We identified associations with BMD in an area of chromosome 12 containing the Osterix (SP7) locus, a transcription factor responsible for regulating osteoblast differentiation (ALSPAC: P = 5.8 x 10^-4; Australia: P = 3.7 x 10^-4). This region has previously shown evidence of association with adult hip and lumbar spine BMD in an Icelandic population, as well as nominal association in a UK population. A meta-analysis of these existing studies revealed strong association between SNPs in the Osterix region and adult lumbar spine BMD (P = 9.9 x 10^-11). In light of these findings, we genotyped a further 3692 individuals from ALSPAC who had whole body BMD and confirmed the association in children as well (P = 5.4 x 10^-5). Moreover, all SNPs were related to height in ALSPAC children, but not weight or body mass index, and when height was included as a covariate in the regression equation, the association with total body BMD was attenuated. We conclude that genetic variants in the region of Osterix are associated with BMD in children and adults probably through primary effects on growth.
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Trilsbeek, P., & Van Uytvanck, D. (2009). Regional archives and community portals. IASA Journal, 32, 69-73.
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

  • Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.

    Abstract

    Two artificial-language learning experiments directly compared English, French, and Dutch listeners’ use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable “words.” These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
  • Van Berkum, J. J. A., Holleman, B., Nieuwland, M. S., Otten, M., & Murre, J. (2009). Right or wrong? The brain's fast response to morally objectionable statements. Psychological Science, 20, 1092-1099. doi:10.1111/j.1467-9280.2009.02411.x.

    Abstract

    How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable…"). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.
  • Van Wijk, C., & Kempen, G. (1987). A dual system for producing self-repairs in spontaneous speech: Evidence from experimentally elicited corrections. Cognitive Psychology, 19, 403-440. doi:10.1016/0010-0285(87)90014-4.

    Abstract

    This paper presents a cognitive theory on the production and shaping of self-repairs during speaking. In an extensive experimental study, a new technique is tried out: artificial elicitation of self-repairs. The data clearly indicate that two mechanisms for computing the shape of self-repairs should be distinguished. One is based on the repair strategy called reformulation, the second one on lemma substitution. W. Levelt’s (1983, Cognition, 14, 41-104) well-formedness rule, which connects self-repairs to coordinate structures, is shown to apply only to reformulations. In case of lemma substitution, a totally different set of rules is at work. The linguistic unit of central importance in reformulations is the major syntactic constituent; in lemma substitutions it is a prosodic unit, the phonological phrase. A parametrization of the model yielded a very satisfactory fit between observed and reconstructed scores.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight in the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Berkum, J. J. A., Hijne, H., De Jong, T., Van Joolingen, W. R., & Njoo, M. (1991). Aspects of computer simulations in education. Education & Computing, 6(3/4), 231-239.

    Abstract

    Computer simulations in an instructional context can be characterized according to four aspects (themes): simulation models, learning goals, learning processes and learner activity. The present paper provides an outline of these four themes. The main classification criterion for simulation models is quantitative vs. qualitative models. For quantitative models a further subdivision can be made by classifying the independent and dependent variables as continuous or discrete. A second criterion is whether one of the independent variables is time, thus distinguishing dynamic and static models. Qualitative models on the other hand use propositions about non-quantitative properties of a system or they describe quantitative aspects in a qualitative way. Related to the underlying model is the interaction with it. When this interaction has a normative counterpart in the real world we call it a procedure. The second theme of learning with computer simulation concerns learning goals. A learning goal is principally classified along three dimensions, which specify different aspects of the knowledge involved. The first dimension, knowledge category, indicates that a learning goal can address principles, concepts and/or facts (conceptual knowledge) or procedures (performance sequences). The second dimension, knowledge representation, captures the fact that knowledge can be represented in a more declarative (articulate, explicit), or in a more compiled (implicit) format, each one having its own advantages and drawbacks. The third dimension, knowledge scope, involves the learning goal's relation with the simulation domain; knowledge can be specific to a particular domain, or generalizable over classes of domains (generic). A more or less separate type of learning goal refers to knowledge acquisition skills that are pertinent to learning in an exploratory environment. Learning processes constitute the third theme. 
Learning processes are defined as cognitive actions of the learner. Learning processes can be classified using a multilevel scheme. The first (highest) of these levels gives four main categories: orientation, hypothesis generation, testing and evaluation. Examples of more specific processes are model exploration and output interpretation. The fourth theme of learning with computer simulations is learner activity. Learner activity is defined as the ‘physical’ interaction of the learner with the simulations (as opposed to the mental interaction that was described in the learning processes). Five main categories of learner activity are distinguished: defining experimental settings (variables, parameters etc.), interaction process choices (deciding a next step), collecting data, choice of data presentation and metacontrol over the simulation.
  • Van Berkum, J. J. A., & De Jong, T. (1991). Instructional environments for simulations. Education & Computing, 6(3/4), 305-358.

    Abstract

    The use of computer simulations in education and training can have substantial advantages over other approaches. In comparison with alternatives such as textbooks, lectures, and tutorial courseware, a simulation-based approach offers the opportunity to learn in a relatively realistic problem-solving context, to practise task performance without stress, to systematically explore both realistic and hypothetical situations, to change the time-scale of events, and to interact with simplified versions of the process or system being simulated. However, learners are often unable to cope with the freedom offered by, and the complexity of, a simulation. As a result many of them resort to an unsystematic, unproductive mode of exploration. There is evidence that simulation-based learning can be improved if the learner is supported while working with the simulation. Constructing such an instructional environment around simulations seems to run counter to the freedom the learner is allowed to in ‘stand alone’ simulations. The present article explores instructional measures that allow for an optimal freedom for the learner. An extensive discussion of learning goals brings two main types of learning goals to the fore: conceptual knowledge and operational knowledge. A third type of learning goal refers to the knowledge acquisition (exploratory learning) process. Cognitive theory has implications for the design of instructional environments around simulations. Most of these implications are quite general, but they can also be related to the three types of learning goals. For conceptual knowledge the sequence and choice of models and problems is important, as is providing the learner with explanations and minimization of error. For operational knowledge cognitive theory recommends learning to take place in a problem solving context, the explicit tracing of the behaviour of the learner, providing immediate feedback and minimization of working memory load. 
For knowledge acquisition goals, it is recommended that the tutor takes the role of a model and coach, and that learning takes place together with a companion. A second source of inspiration for designing instructional environments can be found in Instructional Design Theories. Reviewing these shows that interacting with a simulation can be a part of a more comprehensive instructional strategy, in which for example also prerequisite knowledge is taught. Moreover, information present in a simulation can also be represented in a more structural or static way, and these two forms of presentation can be combined. Learners can be provoked to perform specific learning processes and learner activities by tutor-controlled variations in the simulation, and by tutor-initiated prodding techniques. And finally, instructional design theories showed that complex models and procedures can be taught by starting with central and simple elements of these models and procedures and subsequently presenting more complex models and procedures. Most of the recent simulation-based intelligent tutoring systems involve troubleshooting of complex technical systems. Learners are supposed to acquire knowledge of particular system principles, of troubleshooting procedures, or of both. Commonly encountered instructional features include (a) the sequencing of increasingly complex problems to be solved, (b) the availability of a range of help information on request, (c) the presence of an expert troubleshooting module which can step in to provide criticism on learner performance, hints on the problem nature, or suggestions on how to proceed, (d) the option of having the expert module demonstrate optimal performance afterwards, and (e) the use of different ways of depicting the simulated system. A selection of findings is summarized by placing them under the four themes we think to be characteristic of learning with computer simulations (see de Jong, this volume).
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance an explanatory investigation was made on the different modes of judgment of musical intervals. This was done by way of a semantic differential. Subjects rated 23 intervals against 10 scales. In a factor analysis three factors appeared: pitch, evaluation and fusion. The relation between these factors and some physical characteristics has been investigated. The scale consonant-dissonant showed to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance. Suggestions to account for this difference have been given.
  • Van der Veer, G. C., Bagnara, S., & Kempen, G. (1991). Preface. Acta Psychologica, 78, ix. doi:10.1016/0001-6918(91)90002-H.
  • Van Gijn, R. (2009). The phonology of mixed languages. Journal of Pidgin and Creole Languages, 24(1), 91-117. doi:10.1075/jpcl.24.1.04gij.

    Abstract

    Mixed languages are said to be the result of a process of intertwining (e.g. Bakker & Muysken 1995, Bakker 1997), a regular process in which the grammar of one language is combined with the lexicon of another. However, the outcome of this process differs from language pair to language pair. As far as morphosyntax is concerned, people have discussed these different outcomes and the reasons for them extensively, e.g. Bakker 1997 for Michif, Mous 2003 for Ma’a, Muysken 1997a for Media Lengua and 1997b for Callahuaya. The issue of phonology, however, has not generated a large debate. This paper compares the phonological systems of the mixed languages Media Lengua, Callahuaya, Mednyj Aleut, and Michif. It will be argued that the outcome of the process of intertwining, as far as phonology is concerned, is at least partly determined by the extent to which unmixed phonological domains exist.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
  • Vartiainen, J., Aggujaro, S., Lehtonen, M., Hulten, A., Laine, M., & Salmelin, R. (2009). Neural dynamics of reading morphologically complex words. NeuroImage, 47, 2064-2072. doi:10.1016/j.neuroimage.2009.06.002.

    Abstract

    Despite considerable research interest, it is still an open issue as to how morphologically complex words such as “car+s” are represented and processed in the brain. We studied the neural correlates of the processing of inflected nouns in the morphologically rich Finnish language. Previous behavioral studies in Finnish have yielded a robust inflectional processing cost, i.e., inflected words are harder to recognize than otherwise matched morphologically simple words. Theoretically this effect could stem either from decomposition of inflected words into a stem and a suffix at input level and/or from subsequent recombination at the semantic–syntactic level to arrive at an interpretation of the word. To shed light on this issue, we used magnetoencephalography to reveal the time course and localization of neural effects of morphological structure and frequency of written words. Ten subjects silently read high- and low-frequency Finnish words in inflected and monomorphemic form. Morphological complexity was accompanied by stronger and longer-lasting activation of the left superior temporal cortex from 200 ms onwards. Earlier effects of morphology were not found, supporting the view that the well-established behavioral processing cost for inflected words stems from the semantic–syntactic level rather than from early decomposition. Since the effect of morphology was detected throughout the range of word frequencies employed, the majority of inflected Finnish words appears to be represented in decomposed form and only very high-frequency inflected words may acquire full-form representations.
  • Verhagen, J., & Schimke, S. (2009). Differences or fundamental differences? Zeitschrift für Sprachwissenschaft, 28(1), 97-106. doi:10.1515/ZFSW.2009.011.
  • Verhagen, J. (2009). Temporal adverbials, negation and finiteness in Dutch as a second language: A scope-based account. IRAL, 47(2), 209-237. doi:10.1515/iral.2009.009.

    Abstract

    This study investigates the acquisition of post-verbal (temporal) adverbials and post-verbal negation in L2 Dutch. It is based on previous findings for L2 French that post-verbal negation poses less of a problem for L2 learners than post-verbal adverbial placement (Hawkins, Towell, Bazergui, Second Language Research 9: 189-233, 1993; Herschensohn, Minimally raising the verb issue: 325-336, Cascadilla Press, 1998). The current data show that, at first sight, Moroccan and Turkish learners of Dutch also have fewer problems with post-verbal negation than with post-verbal adverbials. However, when a distinction is made between different types of adverbials, it seems that this holds for adverbials of position such as 'today' but not for adverbials of contrast such as 'again'. To account for this difference, it is argued that different types of adverbial occupy different positions in the L2 data for reasons of scope marking. Moreover, the placement of adverbials such as 'again' interacts with the acquisition of finiteness marking (resulting in post-verbal placement), while there is no such interaction between adverbials such as 'today' and finiteness marking.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Vernes, S. C., MacDermot, K. D., Monaco, A. P., & Fisher, S. E. (2009). Assessing the impact of FOXP1 mutations on developmental verbal dyspraxia. European Journal of Human Genetics, 17(10), 1354-1358. doi:10.1038/ejhg.2009.43.

    Abstract

    Neurodevelopmental disorders that disturb speech and language are highly heritable. Isolation of the underlying genetic risk factors has been hampered by complexity of the phenotype and potentially large number of contributing genes. One exception is the identification of rare heterozygous mutations of the FOXP2 gene in a monogenic syndrome characterised by impaired sequencing of articulatory gestures, disrupting speech (developmental verbal dyspraxia, DVD), as well as multiple deficits in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerisation. FOXP1, the most closely related member of this subgroup, can directly interact with FOXP2 and is co-expressed in neural structures relevant to speech and language disorders. Moreover, investigations of songbird orthologues indicate that combinatorial actions of the two proteins may play important roles in vocal learning, leading to the suggestion that human FOXP1 should be considered a strong candidate for involvement in DVD. Thus, in this study, we screened the entire coding region of FOXP1 (exons and flanking intronic sequence) for nucleotide changes in a panel of probands used earlier to detect novel mutations in FOXP2. A non-synonymous coding change was identified in a single proband, yielding a proline-to-alanine change (P215A). However, this was also found in a random control sample. Analyses of non-coding SNP changes did not find any correlation with affection status. We conclude that FOXP1 mutations are unlikely to represent a major cause of DVD.

    Additional information

    ejhg200943x1.pdf
  • Vernes, S. C., & Fisher, S. E. (2009). Unravelling neurogenetic networks implicated in developmental language disorders. Biochemical Society Transactions (London), 37, 1263-1269. doi:10.1042/BST0371263.

    Abstract

    Childhood syndromes disturbing language development are common and display high degrees of heritability. In most cases, the underlying genetic architecture is likely to be complex, involving multiple chromosomal loci and substantial heterogeneity, which makes it difficult to track down the crucial genomic risk factors. Investigation of rare Mendelian phenotypes offers a complementary route for unravelling key neurogenetic pathways. The value of this approach is illustrated by the discovery that heterozygous FOXP2 (where FOX is forkhead box) mutations cause an unusual monogenic disorder, characterized by problems with articulating speech along with deficits in expressive and receptive language. FOXP2 encodes a regulatory protein, belonging to the forkhead box family of transcription factors, known to play important roles in modulating gene expression in development and disease. Functional genetics using human neuronal models suggests that the different FOXP2 isoforms generated by alternative splicing have distinct properties and may act to regulate each other's activity. Such investigations have also analysed the missense and nonsense mutations found in cases of speech and language disorder, showing that they alter intracellular localization, DNA binding and transactivation capacity of the mutated proteins. Moreover, in the brains of mutant mice, aetiological mutations have been found to disrupt the synaptic plasticity of Foxp2-expressing circuitry. Finally, although mutations of FOXP2 itself are rare, the downstream networks which it regulates in the brain appear to be broadly implicated in typical forms of language impairment. Thus, through ongoing identification of regulated targets and interacting co-factors, this gene is providing the first molecular entry points into neural mechanisms that go awry in language-related disorders.
  • De Vignemont, F., Majid, A., Jola, C., & Haggard, P. (2009). Segmenting the body into parts: Evidence from biases in tactile perception. Quarterly Journal of Experimental Psychology, 62, 500-512. doi:10.1080/17470210802000802.

    Abstract

    How do we individuate body parts? Here, we investigated the effect of body segmentation between hand and arm in tactile and visual perception. In a first experiment, we showed that two tactile stimuli felt farther away when they were applied across the wrist than when they were applied within a single body part (palm or forearm), indicating a “category boundary effect”. In the following experiments, we excluded two hypotheses, which attributed tactile segmentation to other, nontactile factors. In Experiment 2, we showed that the boundary effect does not arise from motor cues. The effect was reduced during a motor task involving flexion and extension movements of the wrist joint. Action brings body parts together into functional units, instead of pulling them apart. In Experiments 3 and 4, we showed that the effect does not arise from perceptual cues of visual discontinuities. We did not find any segmentation effect for the visual percept of the body in Experiment 3, nor for a neutral shape in Experiment 4. We suggest that the mental representation of the body is structured in categorical body parts delineated by joints, and that this categorical representation modulates tactile spatial perception.
  • De Vos, C. (2009). [Review of the book Language complexity as an evolving variable ed. by Geoffrey Sampson, David Gil and Peter Trudgill]. LINGUIST List, 20.4275. Retrieved from http://linguistlist.org/issues/20/20-4275.html.
  • De Vos, C., Van der Kooij, E., & Crasborn, O. (2009). Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3), 315-339. doi:10.1177/0023830909103177.

    Abstract

    The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions, and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position, we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions a phonetic enhancement takes place of raising and furrowing, respectively. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or they are realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions, suggests that in general AU 4 may be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.
  • Vosse, T., & Kempen, G. (2009). In defense of competition during syntactic ambiguity resolution. Journal of Psycholinguistic Research, 38(1), 1-9. doi:10.1007/s10936-008-9075-1.

    Abstract

    In a recent series of publications (Traxler et al. J Mem Lang 39:558–592, 1998; Van Gompel et al. J Mem Lang 52:284–307, 2005; see also Van Gompel et al. (In: Kennedy, et al.(eds) Reading as a perceptual process, Oxford, Elsevier pp 621–648, 2000); Van Gompel et al. J Mem Lang 45:225–258, 2001) eye tracking data are reported showing that globally ambiguous (GA) sentences are read faster than locally ambiguous (LA) counterparts. They argue that these data rule out “constraint-based” models where syntactic and conceptual processors operate concurrently and syntactic ambiguity resolution is accomplished by competition. Such models predict the opposite pattern of reading times. However, this argument against competition is valid only in conjunction with two standard assumptions in current constraint-based models of sentence comprehension: (1) that syntactic competitions (e.g., Which is the best attachment site of the incoming constituent?) are pooled together with conceptual competitions (e.g., Which attachment site entails the most plausible meaning?), and (2) that the duration of a competition is a function of the overall (pooled) quality score obtained by each competitor. We argue that it is not necessary to abandon competition as a successful basis for explaining parsing phenomena and that the above-mentioned reading time data can be accounted for by a parallel-interactive model with conceptual and syntactic processors that do not pool their quality scores together. Within the individual linguistic modules, decision-making can very well be competition-based.
  • Vosse, T., & Kempen, G. (2009). The Unification Space implemented as a localist neural net: Predictions and error-tolerance in a constraint-based parser. Cognitive Neurodynamics, 3, 331-346. doi:10.1007/s11571-009-9094-0.

    Abstract

    We introduce a novel computer implementation of the Unification-Space parser (Vosse & Kempen 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen & Harbusch 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least in a qualitative and rudimentary sense, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
