Publications

  • Sommers, R. P. (2024). Neurobiology of reference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sotaro, K., & Dickey, L. W. (Eds.). (1998). Max Planck Institute for Psycholinguistics: Annual report 1998. Nijmegen: Max Planck Institute for Psycholinguistics.
  • De Sousa, H. (2008). The development of echo-subject markers in Southern Vanuatu. In T. J. Curnow (Ed.), Selected papers from the 2007 Conference of the Australian Linguistic Society. Australian Linguistic Society.

    Abstract

    One of the defining features of the Southern Vanuatu language family is the echo-subject (ES) marker (Lynch 2001: 177-178). Canonically, an ES marker indicates that the subject of the clause is coreferential with the subject of the preceding clause. This paper begins with a survey of the various ES systems found in Southern Vanuatu. Two prominent differences amongst the ES systems are: a) the level of obligatoriness of the ES marker; and b) the level of grammatical integration between an ES clause and the preceding clause. The variation found amongst the ES systems reveals a clear path of grammaticalisation from the VP coordinator *ma in Proto–Southern Vanuatu to the various types of ES marker in contemporary Southern Vanuatu languages.
  • Stärk, K. (2024). The company language keeps: How distributional cues influence statistical learning for language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Starreveld, P. A., La Heij, W., & Verdonschot, R. G. (2013). Time course analysis of the effects of distractor frequency and categorical relatedness in picture naming: An evaluation of the response exclusion account. Language and Cognitive Processes, 28(5), 633-654. doi:10.1080/01690965.2011.608026.

    Abstract

    The response exclusion account (REA), advanced by Mahon and colleagues, localises the distractor frequency effect and the semantic interference effect in picture naming at the level of the response output buffer. We derive four predictions from the REA: (1) the size of the distractor frequency effect should be identical to the frequency effect obtained when distractor words are read aloud, (2) the distractor frequency effect should not change in size when stimulus-onset asynchrony (SOA) is manipulated, (3) the interference effect induced by a distractor word (as measured from a nonword control distractor) should increase in size with increasing SOA, and (4) the word frequency effect and the semantic interference effect should be additive. The results of the picture-naming task in Experiment 1 and the word-reading task in Experiment 2 refute all four predictions. We discuss a tentative account of the findings obtained within a traditional selection-by-competition model in which both context effects are localised at the level of lexical selection.
  • Stefansson, H., Rujescu, D., Cichon, S., Pietilainen, O. P. H., Ingason, A., Steinberg, S., Fossdal, R., Sigurdsson, E., Sigmundsson, T., Buizer-Voskamp, J. E., Hansen, T., Jakobsen, K. D., Muglia, P., Francks, C., Matthews, P. M., Gylfason, A., Halldorsson, B. V., Gudbjartsson, D., Thorgeirsson, T. E., Sigurdsson, A., Jonasdottir, A., Jonasdottir, A., Bjornsson, A., Mattiasdottir, S., Blondal, T., Haraldsson, M., Magnusdottir, B. B., Giegling, I., Möller, H.-J., Hartmann, A., Shianna, K. V., Ge, D., Need, A. C., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Paunio, T., Toulopoulou, T., Bramon, E., Forti, M. D., Murray, R., Ruggeri, M., Vassos, E., Tosato, S., Walshe, M., Li, T., Vasilescu, C., Muhleisen, T. W., Wang, A. G., Ullum, H., Djurovic, S., Melle, I., Olesen, J., Kiemeney, L. A., Franke, B., Sabatti, C., Freimer, N. B., Gulcher, J. R., Thorsteinsdottir, U., Kong, A., Andreassen, O. A., Ophoff, R. A., Georgi, A., Rietschel, M., Werge, T., Petursson, H., Goldstein, D. B., Nothen, M. M., Peltonen, L., Collier, D. A., St. Clair, D., & Stefansson, K. (2008). Large recurrent microdeletions associated with schizophrenia [Letter to Nature]. Nature, 455(7210), 232-236. doi:10.1038/nature07229.

    Abstract

    Reduced fecundity, associated with severe mental disorders, places negative selection pressure on risk alleles and may explain, in part, why common variants have not been found that confer risk of disorders such as autism, schizophrenia and mental retardation. Thus, rare variants may account for a larger fraction of the overall genetic risk than previously assumed. In contrast to rare single nucleotide mutations, rare copy number variations (CNVs) can be detected using genome-wide single nucleotide polymorphism arrays. This has led to the identification of CNVs associated with mental retardation and autism. In a genome-wide search for CNVs associating with schizophrenia, we used a population-based sample to identify de novo CNVs by analysing 9,878 transmissions from parents to offspring. The 66 de novo CNVs identified were tested for association in a sample of 1,433 schizophrenia cases and 33,250 controls. Three deletions at 1q21.1, 15q11.2 and 15q13.3 showing nominal association with schizophrenia in the first sample (phase I) were followed up in a second sample of 3,285 cases and 7,951 controls (phase II). All three deletions significantly associate with schizophrenia and related psychoses in the combined sample. The identification of these rare, recurrent risk variants, having occurred independently in multiple founders and being subject to negative selection, is important in itself. CNV analysis may also point the way to the identification of additional and more prevalent risk variants in genes and pathways involved in schizophrenia.

    Additional information

    Suppl.Material.pdf
  • Stehouwer, H., & Van den Bosch, A. (2008). Putting the t where it belongs: Solving a confusion problem in Dutch. In S. Verberne, H. Van Halteren, & P.-A. Coppen (Eds.), Computational Linguistics in the Netherlands 2007: Selected Papers from the 18th CLIN Meeting (pp. 21-36). Utrecht: LOT.

    Abstract

    A common Dutch writing error is to confuse a word ending in -d with a neighbor word ending in -dt. In this paper we describe the development of a machine-learning-based disambiguator that can determine which word ending is appropriate, on the basis of its local context. We develop alternative disambiguators, varying between a single monolithic classifier and having multiple confusable experts disambiguate between confusable pairs. Disambiguation accuracy of the best developed disambiguators exceeds 99%; when we apply these disambiguators to an external test set of collected errors, our detection strategy correctly identifies up to 79% of the errors.
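    The paper's approach, deciding between the -d and -dt ending from a window of neighbouring tokens, can be sketched as a toy memory-based classifier. The data, features, and voting scheme below are invented for illustration and are far simpler than the authors' system:

```python
from collections import Counter

def context_features(tokens, i, window=2):
    """Local context around position i: (relative offset, token) pairs."""
    feats = []
    for off in range(-window, window + 1):
        if off == 0:
            continue
        j = i + off
        tok = tokens[j] if 0 <= j < len(tokens) else "<pad>"
        feats.append((off, tok.lower()))
    return tuple(feats)

class MajorityByFeature:
    """Memory-based voting: each context feature votes for the endings it
    co-occurred with in training; back off to the overall majority ending."""
    def fit(self, X, y):
        self.votes = {}
        self.majority = Counter(y).most_common(1)[0][0]
        for feats, label in zip(X, y):
            for f in feats:
                self.votes.setdefault(f, Counter())[label] += 1
        return self

    def predict(self, feats):
        tally = Counter()
        for f in feats:
            tally.update(self.votes.get(f, {}))
        return tally.most_common(1)[0][0] if tally else self.majority

# Toy training data: the verb form after 'hij'/'zij' takes -dt, after 'ik' -d.
train = [("ik word moe".split(), 1, "word"),
         ("hij wordt moe".split(), 1, "wordt"),
         ("zij wordt laat".split(), 1, "wordt"),
         ("ik word laat".split(), 1, "word")]
clf = MajorityByFeature().fit([context_features(t, i) for t, i, _ in train],
                              [lab for _, _, lab in train])
print(clf.predict(context_features("hij word moe".split(), 1)))  # 'wordt'
```

    Here the preceding token 'hij' outvotes the neutral context features, so the misspelled form is corrected to the -dt ending.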
  • Stephens, S., Hartz, S., Hoft, N., Saccone, N., Corley, R., Hewitt, J., Hopfer, C., Breslau, N., Coon, H., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Han, Y., Hansel, N., Jiang, C., Korhonen, T., Lind, P., Liu, J., Michel, M., Lyytikäinen, L.-P., Shaffer, J., Short, S., Sun, J., Teumer, A., Thompson, J., Vogelzangs, N., Vink, J., Wenzlaff, A., Wheeler, W., Yang, B.-Z., Aggen, S., Balmforth, A., Baumesiter, S., Beaty, T., Benjamin, D., Bergen, A., Broms, U., Cesarini, D., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D., Foround, T., Furberg, H., Giegling, I., Gillespie, N., Gu, F., Hall, A., Hällfors, J., Han, S., Hartmann, A., Heikkilä, K., Hickie, I., Hottenga, J., Jousilahti, P., Kaakinen, M., Kähönen, M., Koellinger, P., Kittner, S., Konte, B., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S., Mathias, R., McNeil, D., Medlund, S., Montgomery, G., Murray, T., Nauck, M., North, K., Paré, P., Pergadia, M., Ruczinski, I., Salomaa, V., Viikari, J., Willemsen, G., Barnes, K., Boerwinkle, E., Boomsma, D., Caporaso, N., Edenberg, H., Francks, C., Gelernter, J., Grabe, H., Hops, H., Jarvelin, M.-R., Johannesson, M., Kendler, K., Lehtimäki, T., Magnusson, P., Marazita, M., Marchini, J., Mitchell, B., Nöthen, M., Penninx, B., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N., Schwartz, A., Shete, S., Spitz, M., Swan, G., Völzke, H., Veijola, J., Wei, Q., Amos, C., Canon, D., Grucza, R., Hatsukami, D., Heath, A., Johnson, E., Kaprio, J., Madden, P., Martin, N., Stevens, V., Weiss, R., Kraft, P., Bierut, L., & Ehringer, M. (2013). Distinct loci in the CHRNA5/CHRNA3/CHRNB4 gene cluster are associated with onset of regular smoking. Genetic Epidemiology, 37, 846-859. doi:10.1002/gepi.21760.

    Abstract

    Neuronal nicotinic acetylcholine receptor (nAChR) genes (CHRNA5/CHRNA3/CHRNB4) have been reproducibly associated with nicotine dependence, smoking behaviors, and lung cancer risk. Of the few reports that have focused on early smoking behaviors, association results have been mixed. This meta-analysis examines early smoking phenotypes and SNPs in the gene cluster to determine: (1) whether the most robust association signal in this region (rs16969968) for other smoking behaviors is also associated with early behaviors, and/or (2) if additional statistically independent signals are important in early smoking. We focused on two phenotypes: age of tobacco initiation (AOI) and age of first regular tobacco use (AOS). This study included 56,034 subjects (41 groups) spanning nine countries and evaluated five SNPs including rs1948, rs16969968, rs578776, rs588765, and rs684513. Each dataset was analyzed using a centrally generated script. Meta-analyses were conducted from summary statistics. AOS yielded significant associations with SNPs rs578776 (beta = 0.02, P = 0.004), rs1948 (beta = 0.023, P = 0.018), and rs684513 (beta = 0.032, P = 0.017), indicating protective effects. There were no significant associations for the AOI phenotype. Importantly, rs16969968, the most replicated signal in this region for nicotine dependence, cigarettes per day, and cotinine levels, was not associated with AOI (P = 0.59) or AOS (P = 0.92). These results provide important insight into the complexity of smoking behavior phenotypes, and suggest that association signals in the CHRNA5/A3/B4 gene cluster affecting early smoking behaviors may be different from those affecting the mature nicotine dependence phenotype.

  • Stewart, L., Verdonschot, R. G., Nasralla, P., & Lanipekun, J. (2013). Action–perception coupling in pianists: Learned mappings or spatial musical association of response codes (SMARC) effect? Quarterly Journal of Experimental Psychology, 66(1), 37-50. doi:10.1080/17470218.2012.687385.

    Abstract

    The principle of common coding suggests that a joint representation is formed when actions are repeatedly paired with a specific perceptual event. Musicians are occupationally specialized with regard to the coupling between actions and their auditory effects. In the present study, we employed a novel paradigm to demonstrate automatic action–effect associations in pianists. Pianists and nonmusicians pressed keys according to aurally presented number sequences. Numbers were presented at pitches that were neutral, congruent, or incongruent with respect to pitches that would normally be produced by such actions. Response time differences were seen between congruent and incongruent sequences in pianists alone. A second experiment was conducted to determine whether these effects could be attributed to the existence of previously documented spatial/pitch compatibility effects. In a “stretched” version of the task, the pitch distance over which the numbers were presented was enlarged to a range that could not be produced by the hand span used in Experiment 1. The finding of a larger response time difference between congruent and incongruent trials in the original, standard, version compared with the stretched version, in pianists, but not in nonmusicians, indicates that the effects obtained are, at least partially, attributable to learned action effects.
  • Stivers, T. (2008). Stance, alignment, and affiliation during storytelling: When nodding is a token of affiliation. Research on Language and Social Interaction, 41(1), 31-57. doi:10.1080/08351810701691123.

    Abstract

    Through stories, tellers communicate their stance toward what they are reporting. Story recipients rely on different interactional resources to display alignment with the telling activity and affiliation with the teller's stance. In this article, I examine the communication resources participants to tellings rely on to manage displays of alignment and affiliation during the telling. The primary finding is that whereas vocal continuers simply align with the activity in progress, nods also claim access to the teller's stance toward the events (whether directly or indirectly). In mid-telling, when a recipient nods, she or he claims to have access to the teller's stance toward the event being reported, which in turn conveys preliminary affiliation with the teller's position and that the story is on track toward preferred uptake at story completion. Thus, the concepts of structural alignment and social affiliation are separate interactional issues and are managed by different response tokens in the mid-telling sequential environment.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., & Sidnell, J. (Eds.). (2013). The handbook of conversation analysis. Malden, MA: Wiley-Blackwell.

    Abstract

    Presenting a comprehensive, state-of-the-art overview of theoretical and descriptive research in the field, The Handbook of Conversation Analysis brings together contributions by leading international experts to provide an invaluable information resource and reference for scholars of social interaction across the areas of conversation analysis, discourse analysis, linguistic anthropology, interpersonal communication, discursive psychology and sociolinguistics. It is ideal as an introduction to the field for upper level undergraduates and as an in-depth review of the latest developments for graduate level students and established scholars. Five sections outline the history and theory, methods, fundamental concepts, and core contexts in the study of conversation, as well as topics central to conversation analysis. Written by international conversation analysis experts, the book covers a wide range of topics and disciplines, from reviewing underlying structures of conversation, to describing conversation analysis' relationship to anthropology, communication, linguistics, psychology, and sociology.
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of the violations themselves and a morality that focuses on the positioning of actors as they keep their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Stolk, A., Verhagen, L., Schoffelen, J.-M., Oostenveld, R., Blokpoel, M., Hagoort, P., van Rooij, I., & Toni, I. (2013). Neural mechanisms of communicative innovation. Proceedings of the National Academy of Sciences of the United States of America, 110(36), 14574-14579. doi:10.1073/pnas.1303170110.

    Abstract

    Human referential communication is often thought of as coding-decoding a set of symbols, neglecting that establishing shared meanings requires a computational mechanism powerful enough to mutually negotiate them. Sharing the meaning of a novel symbol might rely on similar conceptual inferences across communicators or on statistical similarities in their sensorimotor behaviors. Using magnetoencephalography, we assess spectral, temporal, and spatial characteristics of neural activity evoked when people generate and understand novel shared symbols during live communicative interactions. Solving those communicative problems induced comparable changes in the spectral profile of neural activity of both communicators and addressees. This shared neuronal up-regulation was spatially localized to the right temporal lobe and the ventromedial prefrontal cortex and emerged even before the occurrence of a specific communicative problem. Communicative innovation relies on neuronal computations that are shared across generating and understanding novel shared symbols, operating over temporal scales independent of transient sensorimotor behavior.
  • Stolk, A., Todorovic, A., Schoffelen, J.-M., & Oostenveld, R. (2013). Online and offline tools for head movement compensation in MEG. NeuroImage, 68, 39-48. doi:10.1016/j.neuroimage.2012.11.047.

    Abstract

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments.
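    The offline method, entering the head position time-series into the general linear model and discarding its fitted contribution, can be illustrated with a single nuisance regressor. The numbers below are invented; the actual implementation uses the full rigid-body description of head position:

```python
def regress_out(y, x):
    """OLS of y on a single regressor x (with intercept); return y with the
    fitted contribution of x removed, leaving the mean plus residuals."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return [yi - beta * (xi - mx) for xi, yi in zip(x, y)]

# Simulated per-trial MEG amplitude contaminated by a head-position drift of
# 0.5 units per mm; the true position-free value is 11.0 on every trial.
head_z = [0.0, 1.0, 2.0, 3.0, 4.0]
signal = [10.0, 10.5, 11.0, 11.5, 12.0]
cleaned = regress_out(signal, head_z)
print(cleaned)  # [11.0, 11.0, 11.0, 11.0, 11.0]
```

    Removing the position-related variance in this way shrinks the error term, which is where the reported gains in statistical sensitivity come from.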
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial, mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Sumner, M., Kurumada, C., Gafter, R., & Casillas, M. (2013). Phonetic variation and the recognition of words with pronunciation variants. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3486-3492). Austin, TX: Cognitive Science Society.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
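    The representational similarity logic described here, comparing a neural dissimilarity structure against a model matrix coding the red-green split, can be sketched with made-up voxel patterns (a toy illustration of the method, not the study's analysis):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Fake voxel patterns for four objects: two typically red, two typically green.
patterns = {"tomato": [1.0, 0.9, 0.1], "strawberry": [0.9, 1.0, 0.2],
            "cucumber": [0.1, 0.2, 1.0], "frog": [0.2, 0.1, 0.9]}
colors = {"tomato": "red", "strawberry": "red",
          "cucumber": "green", "frog": "green"}

items = list(patterns)
pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
# Neural RDM entries: 1 - pattern correlation; model RDM: same color -> 0, else 1.
neural = [1.0 - pearson(patterns[a], patterns[b]) for a, b in pairs]
model = [0.0 if colors[a] == colors[b] else 1.0 for a, b in pairs]
print(round(pearson(neural, model), 2))  # close to 1: patterns split by color
```

    A high correlation between the two dissimilarity matrices is the signature the study looked for in the fusiform gyri.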
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Tan, Y., Martin, R. C., & Van Dyke, J. (2013). Verbal WM capacities in sentence comprehension: Evidence from aphasia. Procedia - Social and Behavioral Sciences, 94, 108-109. doi:10.1016/j.sbspro.2013.09.052.
  • Taylor, L. J., Lev-Ari, S., & Zwaan, R. A. (2008). Inferences about action engage action systems. Brain and Language, 107(1), 62-67. doi:10.1016/j.bandl.2007.08.004.

    Abstract

    Verbal descriptions of actions activate compatible motor responses [Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565]. Previous studies have found that the motor processes for manual rotation are engaged in a direction-specific manner when a verb disambiguates the direction of rotation [e.g. “unscrewed;” Zwaan, R. A., & Taylor, L. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135, 1–11]. The present experiment contributes to this body of work by showing that verbs that leave direction ambiguous (e.g. “turned”) do not necessarily yield such effects. Rather, motor resonance is associated with a word that disambiguates some element of an action, as meaning is being integrated across sentences. The findings are discussed within the context of discourse processes, inference generation, motor activation, and mental simulation.
  • Ten Oever, S., Sack, A. T., Wheat, K. L., Bien, N., & Van Atteveldt, N. (2013). Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs. Frontiers in Psychology, 4: 331. doi:10.3389/fpsyg.2013.00331.

    Abstract

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension do not start from the speech signal itself, but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulation decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor for the average judgment and reaction time for each word.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and middle temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernandez, G. (2008). Contributions of the medial temporal lobe to declarative memory retrieval: Manipulating the amount of contextual retrieval. Learning and Memory, 15(9), 611-617. doi:10.1101/lm.916708.

    Abstract

    We investigated how the hippocampus and its adjacent mediotemporal structures contribute to contextual and noncontextual declarative memory retrieval by manipulating the amount of contextual information across two levels of the same contextual dimension in a source memory task. A first analysis identified medial temporal lobe (MTL) substructures mediating either contextual or noncontextual retrieval. A linearly weighted analysis elucidated which MTL substructures show a gradually increasing neural activity, depending on the amount of contextual information retrieved. A hippocampal engagement was found during both levels of source memory but not during item memory retrieval. The anterior MTL including the perirhinal cortex was only engaged during item memory retrieval by an activity decrease. Only the posterior parahippocampal cortex showed an activation increasing with the amount of contextual information retrieved. If one assumes a roughly linear relationship between the blood-oxygenation level-dependent (BOLD) signal and the associated cognitive process, our results suggest that the posterior parahippocampal cortex is involved in contextual retrieval on the basis of memory strength while the hippocampus processes representations of item-context binding. The anterior MTL including perirhinal cortex seems to be particularly engaged in familiarity-based item recognition. If one assumes departure from linearity, however, our results can also be explained by one-dimensional modulation of memory strength.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A., & Burenhult, N. (2008). Orientation as a strategy of spatial reference. Studies in Language, 32(1), 93-136. doi:10.1075/sl.32.1.05ter.

    Abstract

    This paper explores a strategy of spatial expression which utilizes orientation, a way of describing the spatial relationship of entities by means of reference to their facets. We present detailed data and analysis from two languages, Jahai (Mon-Khmer, Malay Peninsula) and Lavukaleve (Papuan isolate, Solomon Islands), and supporting data from five more languages, to show that the orientation strategy is a major organizing principle in these languages. This strategy has not previously been recognized in the literature as a unitary phenomenon, and the languages which employ it present particular challenges to existing typologies of spatial frames of reference.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    supplementary information
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Toni, I., De Lange, F. P., Noordzij, M. L., & Hagoort, P. (2008). Language beyond action. Journal of Physiology-Paris, 102, 71-79. doi:10.1016/j.jphysparis.2008.03.005.

    Abstract

    The discovery of mirror neurons in macaques and of a similar system in humans has provided a new and fertile neurobiological ground for rooting a variety of cognitive faculties. Automatic sensorimotor resonance has been invoked as the key elementary process accounting for disparate (dys)functions, like imitation, ideomotor apraxia, autism, and schizophrenia. In this paper, we provide a critical appraisal of three of these claims that deal with the relationship between language and the motor system. Does language comprehension require the motor system? Was there an evolutionary switch from manual gestures to speech as the primary mode of language? Is human communication explained by automatic sensorimotor resonances? A positive answer to these questions would open the tantalizing possibility of bringing language and human communication within the fold of the motor system. We argue that the available empirical evidence does not appear to support these claims, and their theoretical scope fails to account for some crucial features of the phenomena they are supposed to explain. Without denying the enormous importance of the discovery of mirror neurons, we highlight the limits of their explanatory power for understanding language and communication.
  • Tornero, D., Wattananit, S., Madsen, M. G., Koch, P., Wood, J., Tatarishvili, J., Mine, Y., Ge, R., Monni, E., Devaraju, K., Hevner, R. F., Bruestle, O., Lindval, O., & Kokaia, Z. (2013). Human induced pluripotent stem cell-derived cortical neurons integrate in stroke-injured cortex and improve functional recovery. Brain, 136(12), 3561-3577. doi:10.1093/brain/awt278.

    Abstract

    Stem cell-based approaches to restore function after stroke through replacement of dead neurons require the generation of specific neuronal subtypes. Loss of neurons in the cerebral cortex is a major cause of stroke-induced neurological deficits in adult humans. Reprogramming of adult human somatic cells to induced pluripotent stem cells is a novel approach to produce patient-specific cells for autologous transplantation. Whether such cells can be converted to functional cortical neurons that survive and give rise to behavioural recovery after transplantation in the stroke-injured cerebral cortex is not known. We have generated progenitors in vitro, expressing specific cortical markers and giving rise to functional neurons, from long-term self-renewing neuroepithelial-like stem cells, produced from adult human fibroblast-derived induced pluripotent stem cells. At 2 months after transplantation into the stroke-damaged rat cortex, the cortically fated cells showed less proliferation and more efficient conversion to mature neurons with morphological and immunohistochemical characteristics of a cortical phenotype and higher axonal projection density as compared with non-fated cells. Pyramidal morphology and localization of the cells expressing the cortex-specific marker TBR1 in a certain layered pattern provided further evidence supporting the cortical phenotype of the fated, grafted cells, and electrophysiological recordings demonstrated their functionality. Both fated and non-fated cell-transplanted groups showed bilateral recovery of the impaired function in the stepping test compared with vehicle-injected animals. The behavioural improvement at this early time point was most likely not due to neuronal replacement and reconstruction of circuitry. At 5 months after stroke in immunocompromised rats, there was no tumour formation and the grafted cells exhibited electrophysiological properties of mature neurons with evidence of integration in host circuitry. Our findings show, for the first time, that human skin-derived induced pluripotent stem cells can be differentiated to cortical neuronal progenitors, which survive, differentiate to functional neurons and improve neurological outcome after intracortical implantation in a rat stroke model.
  • Trilsbeek, P., Broeder, D., Van Valkenhoef, T., & Wittenburg, P. (2008). A grid of regional language archives. In N. Calzolari (Ed.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008) (pp. 1474-1477). European Language Resources Association (ELRA).

    Abstract

    About two years ago, the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, started an initiative to install regional language archives in various places around the world, particularly in places where a large number of endangered languages exist and are being documented. These digital archives make use of the LAT archiving framework [1] that the MPI has developed over the past nine years. This framework consists of a number of web-based tools for depositing, organizing and utilizing linguistic resources in a digital archive. The regional archives are in principle autonomous archives, but they can decide to share metadata descriptions and language resources with the MPI archive in Nijmegen and become part of a grid of linked LAT archives. By doing so, they will also take advantage of the long-term preservation strategy of the MPI archive. This paper describes the reasoning behind this initiative and how in practice such an archive is set up.
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
  • Tsuji, S., & Cristia, A. (2013). Fifty years of infant vowel discrimination research: What have we learned? Journal of the Phonetic Society of Japan, 17(3), 1-11.
  • Turco, G., Dimroth, C., & Braun, B. (2013). Intonational means to mark verum focus in German and French. Language and Speech, 56(4), 461-491. doi:10.1177/0023830912460506.

    Abstract

    German and French differ in a number of aspects. Regarding the prosody-pragmatics interface, German is said to have a direct focus-to-accent mapping, which is largely absent in French – owing to strong structural constraints. We used a semi-spontaneous dialogue setting to investigate the intonational marking of Verum Focus, a focus on the polarity of an utterance in the two languages (e.g. the child IS tearing the banknote as an opposite claim to the child is not tearing the banknote). When Verum Focus applies to auxiliaries, pragmatic aspects (i.e. highlighting the contrast) directly compete with structural constraints (e.g. avoiding an accent on phonologically weak elements such as monosyllabic function words). Intonational analyses showed that auxiliaries were predominantly accented in German, as expected. Interestingly, we found a high number of (as yet undocumented) focal accents on phrase-initial auxiliaries in French Verum Focus contexts. When French accent patterns were equally distributed across information structural contexts, relative prominence (in terms of peak height) between initial and final accents was shifted towards initial accents in Verum Focus compared to non-Verum Focus contexts. Our data hence suggest that French also may mark Verum Focus by focal accents but that this tendency is partly overridden by strong structural constraints.
  • Uddén, J., Folia, V., Forkstam, C., Ingvar, M., Fernández, G., Overeem, S., Van Elswijk, G., Hagoort, P., & Petersson, K. M. (2008). The inferior frontal cortex in artificial syntax processing: An rTMS study. Brain Research, 1224, 69-78. doi:10.1016/j.brainres.2008.05.070.

    Abstract

    The human capacity to implicitly acquire knowledge of structured sequences has recently been investigated in artificial grammar learning using functional magnetic resonance imaging. It was found that the left inferior frontal cortex (IFC; Brodmann's area (BA) 44/45) was related to classification performance. The objective of this study was to investigate whether the IFC (BA 44/45) is causally related to classification of artificial syntactic structures by means of an off-line repetitive transcranial magnetic stimulation (rTMS) paradigm. We manipulated the stimulus material in a 2 × 2 factorial design with grammaticality status and local substring familiarity as factors. The participants showed a reliable effect of grammaticality on classification of novel items after 5 days of exposure to grammatical exemplars without performance feedback in an implicit acquisition task. The results show that rTMS of BA 44/45 improves syntactic classification performance by increasing the rejection rate of non-grammatical items and by shortening reaction times of correct rejections specifically after left-sided stimulation. A similar pattern of results is observed in FMRI experiments on artificial syntactic classification. These results suggest that activity in the inferior frontal region is causally related to artificial syntax processing.
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.
  • Van Berkum, J. J. A., Van den Brink, D., Tesink, C. M. J. Y., Kos, M., & Hagoort, P. (2008). The neural integration of speaker and message. Journal of Cognitive Neuroscience, 20(4), 580-591. doi:10.1162/jocn.2008.20054.

    Abstract

    When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., “If only I looked like Britney Spears” in a male voice, or “I have a large tattoo on my back” spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200–300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard “Gricean” models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence (“semantics”) before working out what it really means given the wider communicative context and the particular speaker (“pragmatics”). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.
  • Van Berkum, J. J. A. (2008). Understanding sentences in context: What brain waves can tell us. Current Directions in Psychological Science, 17(6), 376-380. doi:10.1111/j.1467-8721.2008.00609.x.

    Abstract

    Language comprehension looks pretty easy. You pick up a novel and simply enjoy the plot, or ponder the human condition. You strike up a conversation and listen to whatever the other person has to say. Although what you're taking in is a bunch of letters and sounds, what you really perceive—if all goes well—is meaning. But how do you get from one to the other so easily? The experiments with brain waves (event-related brain potentials or ERPs) reviewed here show that the linguistic brain rapidly draws upon a wide variety of information sources, including prior text and inferences about the speaker. Furthermore, people anticipate what might be said about whom, they use heuristics to arrive at the earliest possible interpretation, and if it makes sense, they sometimes even ignore the grammar. Language comprehension is opportunistic, proactive, and, above all, immediately context-dependent.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Leeuwen, E. J. C., & Haun, D. B. M. (2013). Conformity in nonhuman primates: Fad or fact? Evolution and Human Behavior, 34, 1-7. doi:10.1016/j.evolhumbehav.2012.07.005.

    Abstract

    Majority influences have long been a subject of great interest for social psychologists and, more recently, for researchers investigating social influences in nonhuman primates. Although this empirical endeavor has culminated in the conclusion that some ape and monkey species show “conformist” tendencies, the current approach seems to suffer from two fundamental limitations: (a) majority influences have not been operationalized in accord with any of the existing definitions, thereby compromising the validity of cross-species comparisons, and (b) the results have not been systematically scrutinized in light of alternative explanations. In this review, we aim to address these limitations theoretically. First, we will demonstrate how the experimental designs used in nonhuman primate studies cannot test for conformity unambiguously and address alternative explanations and potential confounds for the presented results in the form of primacy effects, frequency exposure, and perception ambiguity. Second, we will show how majority influences have been defined differently across disciplines and, therefore, propose a set of definitions in order to streamline majority influence research, where conformist transmission and conformity will be put forth as operationalizations of the overarching denominator majority influences. Finally, we conclude with suggestions to foster the study of majority influences by clarifying the empirical scope of each proposed definition, exploring compatible research designs and highlighting how majority influences are inherently contingent on situational trade-offs.
  • Van den Bos, E., & Poletiek, F. H. (2008). Effects of grammar complexity on artificial grammar learning. Memory & Cognition, 36(6), 1122-1131. doi:10.3758/MC.36.6.1122.

    Abstract

    The present study identified two aspects of complexity that have been manipulated in the implicit learning literature and investigated how they affect implicit and explicit learning of artificial grammars. Ten finite state grammars were used to vary complexity. The results indicated that dependency length is more relevant to the complexity of a structure than is the number of associations that have to be learned. Although implicit learning led to better performance on a grammaticality judgment test than did explicit learning, it was negatively affected by increasing complexity: Performance decreased as there was an increase in the number of previous letters that had to be taken into account to determine whether or not the next letter was a grammatical continuation. In particular, the results suggested that implicit learning of higher order dependencies is hampered by the presence of longer dependencies. Knowledge of first-order dependencies was acquired regardless of complexity and learning mode.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Leeuwen, E. J. C., Cronin, K. A., Schütte, S., Call, J., & Haun, D. B. M. (2013). Chimpanzees flexibly adjust their behaviour in order to maximize payoffs, not to conform to majorities. PLoS One, 8(11): e80945. doi:10.1371/journal.pone.0080945.

    Abstract

    Chimpanzees have been shown to be adept learners, both individually and socially. Yet, sometimes their conservative nature seems to hamper the flexible adoption of superior alternatives, even to the extent that they persist in using entirely ineffective strategies. In this study, we investigated chimpanzees’ behavioural flexibility in two different conditions under which social animals have been predicted to abandon personal preferences and adopt alternative strategies: i) under influence of majority demonstrations (i.e. conformity), and ii) in the presence of superior reward contingencies (i.e. maximizing payoffs). Unlike previous nonhuman primate studies, this study disentangled the concept of conformity from the tendency to maintain one’s first-learned strategy. Studying captive (n=16) and semi-wild (n=12) chimpanzees in two complementary exchange paradigms, we found that chimpanzees did not abandon their behaviour in order to match the majority, but instead remained faithful to their first-learned strategy (Study 1a and 1b). However, the chimpanzees’ fidelity to their first-learned strategy was overridden by an experimental upgrade of the profitability of the alternative strategy (Study 2). We interpret our observations in terms of chimpanzees’ relative weighing of behavioural options as a function of situation-specific trade-offs. More specifically, contrary to previous findings, chimpanzees in our study abandoned their familiar behaviour to maximize payoffs, but not to conform to a majority.
  • Van Ooijen, B., Cutler, A., & Bertinetto, P. M. (1993). Click detection in Italian and English. In Eurospeech 93: Vol. 1 (pp. 681-684). Berlin: ESCA.

    Abstract

    We report four experiments in which English and Italian monolinguals detected clicks in continuous speech in their native language. Two of the experiments used an off-line location task, and two used an on-line reaction time task. Despite there being large differences between English and Italian with respect to rhythmic characteristics, very similar response patterns were found for the two language groups. It is concluded that the process of click detection operates independently from language-specific differences in perceptual processing at the sublexical level.
  • Van Beek, G., Flecken, M., & Starren, M. (2013). Aspectual perspective taking in event construal in L1 and L2 Dutch. International review of applied linguistics, 51(2), 199-227. doi:10.1515/iral-2013-0009.

    Abstract

    The present study focuses on the role of grammatical aspect in event construal and its function in encoding the specificity of an event. We investigate whether advanced L2 learners (L1 German) acquire target-like patterns of use of progressive aspect in Dutch, a language in which use of aspect depends on specific situation types. We analyze use of progressive markers and patterns in information selection, relating to specific features of agents or actions in dynamic event scenes. L2 event descriptions are compared with L1 Dutch and L1 German data. The L2 users display the complex situation-dependent patterns of use of aspect in Dutch, but they do not select the aspectual viewpoint (aan het construction) to the same extent as native speakers. Moreover, the encoding of specificity of the events (mentioning of specific agent and action features) reflects L1 transfer, as well as target-like performance in specific domains.
  • Van Putten, S. (2013). [Review of the book The expression of information structure. A documentation of its diversity across Africa, ed. by Ines Fiedler and Anne Schwarz]. Journal of African Languages and Linguistics, 34, 183-186. doi:10.1515/jall-2013-0005.

    Abstract

    This volume contains 13 papers dealing with various aspects of information structure in a wide variety of African languages. They form the proceedings of a workshop organized by the Collaborative Research Center on Information Structure (University of Potsdam and Humboldt University, Berlin). In the introduction, the editors define the main contribution of this volume in terms of “the spectrum of information-structural notions and phenomena discussed, the investigation of information structure in several relatively unfamiliar languages and the genealogical width of the African languages studied.” (vii–viii, emphasis added). In this sense it complements the previous volume on information structure in African languages published by the Collaborative Research Center and the University of Amsterdam (Aboh, Hartmann & Zimmermann, 2007), which was more theory-oriented.
  • Van der Zande, P. (2013). Hearing and seeing speech: Perceptual adjustments in auditory-visual speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Heuven, W. J. B., Schriefers, H., Dijkstra, T., & Hagoort, P. (2008). Language conflict in the bilingual brain. Cerebral Cortex, 18(11), 2706-2716. doi:10.1093/cercor/bhn030.

    Abstract

    The large majority of humankind is more or less fluent in 2 or even more languages. This raises the fundamental question of how the language network in the brain is organized such that the correct target language is selected at a particular occasion. Here we present behavioral and functional magnetic resonance imaging data showing that bilingual processing leads to language conflict in the bilingual brain even when the bilinguals’ task only required target language knowledge. This finding demonstrates that the bilingual brain cannot avoid language conflict, because words from the target and nontarget languages become automatically activated during reading. Importantly, stimulus-based language conflict was found in brain regions in the LIPC associated with phonological and semantic processing, whereas response-based language conflict was only found in the pre-supplementary motor area/anterior cingulate cortex when language conflict leads to response conflicts.
  • Van Uytvanck, D., Dukers, A., Ringersma, J., & Trilsbeek, P. (2008). Language-sites: Accessing and presenting language resources via geographic information systems. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). Paris: European Language Resources Association (ELRA).

    Abstract

    The emerging area of Geographic Information Systems (GIS) has proven to add an interesting dimension to many research projects. Within the language-sites initiative we have brought together a broad range of links to digital language corpora and resources. Via Google Earth's visually appealing 3D-interface users can spin the globe, zoom into an area they are interested in and access directly the relevant language resources. This paper focuses on several ways of relating the map and the online data (lexica, annotations, multimedia recordings, etc.). Furthermore, we discuss some of the implementation choices that have been made, including future challenges. In addition, we show how scholars (both linguists and anthropologists) are using GIS tools to fulfill their specific research needs by making use of practical examples. This illustrates how both scientists and the general public can benefit from geography-based access to digital language data.
  • Van der Valk, R. J. P., Duijts, L., Timpson, N. J., Salam, M. T., Standl, M., Curtin, J. A., Genuneit, J., Kerhof, M., Kreiner-Møller, E., Cáceres, A., Gref, A., Liang, L. L., Taal, H. R., Bouzigon, E., Demenais, F., Nadif, R., Ober, C., Thompson, E. E., Estrada, K., Hofman, A., Uitterlinden, A. G., van Duijn, C., Rivadeneira, F., Li, X., Eckel, S. P., Berhane, K., Gauderman, W. J., Granell, R., Evans, D. M., St Pourcain, B., McArdle, W., Kemp, J. P., Smith, G. D., Tiesler, C. M. T., Flexeder, C., Simpson, A., Murray, C. S., Fuchs, O., Postma, D. S., Bønnelykke, K., Torrent, M., Andersson, M., Sleiman, P., Hakonarson, H., Cookson, W. O., Moffatt, M. F., Paternoster, L., Melén, E., Sunyer, J., Bisgaard, H., Koppelman, G. H., Ege, M., Custovic, A., Heinrich, J., Gilliland, F. D., Henderson, A. J., Jaddoe, V. W. V., de Jongste, J. C., & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Fraction of exhaled nitric oxide values in childhood are associated with 17q11.2-q12 and 17q12-q21 variants. Journal of Allergy and Clinical Immunology, 134(1), 46-55. doi:10.1016/j.jaci.2013.08.053.

    Abstract

    BACKGROUND: The fraction of exhaled nitric oxide (Feno) value is a biomarker of eosinophilic airway inflammation and is associated with childhood asthma. Identification of common genetic variants associated with childhood Feno values might help to define biological mechanisms related to specific asthma phenotypes.
    OBJECTIVE: We sought to identify the genetic variants associated with childhood Feno values and their relation with asthma.
    METHODS: Feno values were measured in children age 5 to 15 years. In 14 genome-wide association studies (N = 8,858), we examined the associations of approximately 2.5 million single nucleotide polymorphisms (SNPs) with Feno values. Subsequently, we assessed whether significant SNPs were expression quantitative trait loci in genome-wide expression data sets of lymphoblastoid cell lines (n = 1,830) and were related to asthma in a previously published genome-wide association data set (cases, n = 10,365; control subjects: n = 16,110).
    RESULTS: We identified 3 SNPs associated with Feno values: rs3751972 in LYR motif containing 9 (LYRM9; P = 1.97 × 10(-10)) and rs944722 in inducible nitric oxide synthase 2 (NOS2; P = 1.28 × 10(-9)), both of which are located at 17q11.2-q12, and rs8069176 near gasdermin B (GSDMB; P = 1.88 × 10(-8)) at 17q12-q21. We found a cis expression quantitative trait locus for the transcript soluble galactoside-binding lectin 9 (LGALS9) that is in linkage disequilibrium with rs944722. rs8069176 was associated with GSDMB and ORM1-like 3 (ORMDL3) expression. rs8069176 at 17q12-q21, but not rs3751972 and rs944722 at 17q11.2-q12, were associated with physician-diagnosed asthma.
    CONCLUSION: This study identified 3 variants associated with Feno values, explaining 0.95% of the variance. Identification of functional SNPs and haplotypes in these regions might provide novel insight into the regulation of Feno values. This study highlights that both shared and distinct genetic factors affect Feno values and childhood asthma.
  • Van Stekelenburg, J., Anikina, N. C., Pouw, W., Petrovic, I., & Nederlof, N. (2013). From correlation to causation: The cruciality of a collectivity in the context of collective action. Journal of Social and Political Psychology, 1(1), 161-187. doi:10.5964/jspp.v1i1.38.

    Abstract

    This paper discusses a longitudinal field study on collective action which aims to move beyond student samples and enhance mundane realism. First we provide a historical overview of the literature on the what (i.e., antecedents of collective action) and the how (i.e., the methods employed) of the social psychology of protest. This historical overview is substantiated with meta-analytical evidence on how these antecedents and methods changed over time. After the historical overview, we provide an empirical illustration of a longitudinal field study in a natural setting―a newly-built Dutch neighbourhood. We assessed changes in informal embeddedness, efficacy, identification, emotions, and grievances over time. Between t0 and t1 the residents protested against the plan to allow a mosque to carry out its services in a community building in the neighbourhood. We examined the antecedents of protest before [t0] and after [t1] the protests, and whether residents participated or not. We show how a larger social network functions as a catalyst in steering protest participation. Our longitudinal field study replicates basic findings from experimental and survey research. However, it also shows that one antecedent in particular, which is hard to manipulate in the lab (i.e., the size of someone’s social network), proved to be of great importance. We suggest that in overcoming our most pertinent challenge―causality―we should not only remain in our laboratories but also go out and examine real-life situations with people situated in real-life social networks.
  • Van Berkum, J. J. A., De Goede, D., Van Alphen, P. M., Mulder, E. R., & Kerstholt, J. H. (2013). How robust is the language architecture? The case of mood. Frontiers in Psychology, 4: 505. doi:10.3389/fpsyg.2013.00505.

    Abstract

    In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that “David praised Linda because … ” would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., “The boys turns”). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying “cold” cognitive functions in relation to “hot” aspects of the brain.
  • Van den Bos, E., & Poletiek, F. H. (2008). Intentional artificial grammar learning: When does it work? European Journal of Cognitive Psychology, 20(4), 793-806. doi:10.1080/09541440701554474.

    Abstract

    Actively searching for the rules of an artificial grammar has often been shown to produce no more knowledge than memorising exemplars without knowing that they have been generated by a grammar. The present study investigated whether this ineffectiveness of intentional learning could be overcome by removing dual task demands and providing participants with more specific instructions. The results only showed a positive effect of learning intentionally for participants specifically instructed to find out which letters are allowed to follow each other. These participants were also unaffected by a salient feature. In contrast, for participants who did not know what kind of structure to expect, intentional learning was not more effective than incidental learning and knowledge acquisition was guided by salience.
  • Van Valin Jr., R. D. (2013). Head-marking languages and linguistic theory. In B. Bickel, L. A. Grenoble, D. A. Peterson, & A. Timberlake (Eds.), Language typology and historical contingency: In honor of Johanna Nichols (pp. 91-124). Amsterdam: Benjamins.

    Abstract

    In her path-breaking 1986 paper, Johanna Nichols proposed a typological contrast between head-marking and dependent-marking languages. Nichols argues that even though the syntactic relations between the head and its dependents are the same in both types of language, the syntactic “bond” between them is not the same; in dependent-marking languages it is one of government, whereas in head-marking languages it is one of apposition. This distinction raises an important question for linguistic theory: How can this contrast – government versus apposition – which can show up in all of the major phrasal types in a language, be captured? The purpose of this paper is to explore the various approaches that have been taken in an attempt to capture the difference between head-marked and dependent-marked syntax in different linguistic theories. The basic problem that head-marking languages pose for syntactic theory will be presented, and then generative approaches will be discussed. The analysis of head-marked structure in Role and Reference Grammar will be presented.
  • Van Valin Jr., R. D. (2013). Lexical representation, co-composition, and linking syntax and semantics. In J. Pustejovsky, P. Bouillon, H. Isahara, K. Kanzaki, & C. Lee (Eds.), Advances in generative lexicon theory (pp. 67-107). Dordrecht: Springer.
  • Van der Zande, P., Jesse, A., & Cutler, A. (2013). Lexically guided retuning of visual phonetic categories. Journal of the Acoustical Society of America, 134, 562-571. doi:10.1121/1.4807814.

    Abstract

    Listeners retune the boundaries between phonetic categories to adjust to individual speakers' productions. Lexical information, for example, indicates what an unusual sound is supposed to be, and boundary retuning then enables the speaker's sound to be included in the appropriate auditory phonetic category. In this study, it was investigated whether lexical knowledge that is known to guide the retuning of auditory phonetic categories, can also retune visual phonetic categories. In Experiment 1, exposure to a visual idiosyncrasy in ambiguous audiovisually presented target words in a lexical decision task indeed resulted in retuning of the visual category boundary based on the disambiguating lexical context. In Experiment 2 it was tested whether lexical information retunes visual categories directly, or indirectly through the generalization from retuned auditory phonetic categories. Here, participants were exposed to auditory-only versions of the same ambiguous target words as in Experiment 1. Auditory phonetic categories were retuned by lexical knowledge, but no shifts were observed for the visual phonetic categories. Lexical knowledge can therefore guide retuning of visual phonetic categories, but lexically guided retuning of auditory phonetic categories is not generalized to visual categories. Rather, listeners adjust auditory and visual phonetic categories to talker idiosyncrasies separately.
  • Van Valin Jr., R. D. (Ed.). (2008). Investigations of the syntax-semantic-pragmatics interface. Amsterdam: Benjamins.

    Abstract

    Investigations of the Syntax-Semantics-Pragmatics Interface presents on-going research in Role and Reference Grammar in a number of critical areas of linguistic theory: verb semantics and argument structure, the nature of syntactic categories and syntactic representation, prosody and syntax, information structure and syntax, and the syntax and semantics of complex sentences. In each of these areas there are important results which not only advance the development of the theory, but also contribute to the broader theoretical discussion. In particular, there are analyses of grammatical phenomena such as transitivity in Kabardian, the verb-less numeral quantifier construction in Japanese, and an unusual kind of complex sentence in Wari’ (Chapakuran, Brazil) which not only illustrate the descriptive and explanatory power of the theory, but also present interesting challenges to other approaches. In addition, there are papers looking at the implications and applications of Role and Reference Grammar for neurolinguistic research, parsing and automated text analysis.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Wingen, G. A., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J. K., & Fernández, G. (2008). Progesterone selectively increases amygdala reactivity in women. Molecular Psychiatry, 13, 325-333. doi:10.1038/sj.mp.4002030.

    Abstract

    The acute neural effects of progesterone are mediated by its neuroactive metabolites allopregnanolone and pregnanolone. These neurosteroids potentiate the inhibitory actions of γ-aminobutyric acid (GABA). Progesterone is known to produce anxiolytic effects in animals, but recent animal studies suggest that pregnanolone increases anxiety after a period of low allopregnanolone concentration. This effect is potentially mediated by the amygdala and related to the negative mood symptoms in humans that are observed during increased allopregnanolone levels. Therefore, we investigated with functional magnetic resonance imaging (fMRI) whether a single progesterone administration to healthy young women in their follicular phase modulates the amygdala response to salient, biologically relevant stimuli. The progesterone administration increased the plasma concentrations of progesterone and allopregnanolone to levels that are reached during the luteal phase and early pregnancy. The imaging results show that progesterone selectively increased amygdala reactivity. Furthermore, functional connectivity analyses indicate that progesterone modulated functional coupling of the amygdala with distant brain regions. These results reveal a neural mechanism by which progesterone may mediate adverse effects on anxiety and mood.
  • Van Leeuwen, T. M., Hagoort, P., & Händel, B. F. (2013). Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls. Neuropsychologia, 51(10), 1802-1813. doi:10.1016/j.neuropsychologia.2013.06.024.

    Abstract

    Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van de Geer, J. P., Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

    Abstract

    As a preliminary to further research on musical consonance, an exploratory investigation was made of the different modes of judgment of musical intervals. This was done by way of a semantic differential: subjects rated 23 intervals against 10 scales. A factor analysis yielded three factors: pitch, evaluation, and fusion. The relation between these factors and some physical characteristics was investigated. The consonant-dissonant scale proved to be purely evaluative (in opposition to Stumpf's theory). This evaluative connotation is not in accordance with the musicological meaning of consonance; suggestions to account for this difference are given.
  • Van Valin Jr., R. D. (2008). Some remarks on universal grammar. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 311-320). New York: Psychology Press.
  • Van Valin Jr., R. D. (2008). RPs and the nature of lexical and syntactic categories in role and reference grammar. In R. D. Van Valin Jr. (Ed.), Investigations of the syntax-semantics-pragmatics interface (pp. 161-178). Amsterdam: Benjamins.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Van Geert, E., Ding, R., & Wagemans, J. (2024). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts. Advance online publication. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Van der Werff, J., Ravignani, A., & Jadoul, Y. (2024). thebeat: A Python package for working with rhythms and other temporal sequences. Behavior Research Methods, 56, 3725-3736. doi:10.3758/s13428-023-02334-8.

    Abstract

    thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study.
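    One of the rhythmic measures named above, interval ratios, expresses each inter-onset interval (IOI) relative to the sum of itself and its successor, so an isochronous sequence yields ratios of 0.5. A minimal sketch of the underlying arithmetic in plain Python (an illustration only; thebeat's own API operates on the sequence objects the package generates, and its function names differ):

```python
def interval_ratios(iois):
    """Ratio of each inter-onset interval (IOI, e.g., in ms) to the
    sum of itself and its successor; isochronous input yields 0.5."""
    return [a / (a + b) for a, b in zip(iois, iois[1:])]

# Long-short alternation of 500 ms and 250 ms intervals
print(interval_ratios([500, 250, 500, 250]))
```

    Ratios near 0.5 indicate locally isochronous timing, while values above or below 0.5 mark long-short or short-long alternation, which is why this measure is common in timing and bioacoustics research.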
  • van der Burght, C. L., & Meyer, A. S. (2024). Interindividual variation in weighting prosodic and semantic cues during sentence comprehension – a partial replication of Van der Burght et al. (2021). In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 792-796). doi:10.21437/SpeechProsody.2024-160.

    Abstract

    Contrastive pitch accents can mark sentence elements occupying parallel roles. In “Mary kissed John, not Peter”, a pitch accent on Mary or John cues the implied syntactic role of Peter. Van der Burght, Friederici, Goucha, and Hartwigsen (2021) showed that listeners can build expectations concerning syntactic and semantic properties of upcoming words, derived from pitch accent information they heard previously. To further explore these expectations, we attempted a partial replication of the original German study in Dutch. In the experimental sentences “Yesterday, the police officer arrested the thief, not the inspector/murderer”, a pitch accent on subject or object cued the subject/object role of the ellipsis clause. Contrasting elements were additionally cued by the thematic role typicality of the nouns. Participants listened to sentences in which the ellipsis clause was omitted and selected the most plausible sentence-final noun (presented visually) via button press. Replicating the original study results, listeners based their sentence-final preference on the pitch accent information available in the sentence. However, as in the original study, individual differences between listeners were found, with some following prosodic information and others relying on a structural bias. The results complement the literature on ellipsis resolution and on interindividual variability in cue weighting.
  • Váradi, T., Wittenburg, P., Krauwer, S., Wynne, M., & Koskenniemi, K. (2008). CLARIN: Common language resources and technology infrastructure. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    This paper gives an overview of the CLARIN project [1], which aims to create a research infrastructure that makes language resources and technology (LRT) available and readily usable to scholars of all disciplines, in particular the humanities and social sciences (HSS).
  • Vaughn, C., & Brouwer, S. (2013). Perceptual integration of indexical information in bilingual speech. Proceedings of Meetings on Acoustics, 19: 060208. doi:10.1121/1.4800264.

    Abstract

    The present research examines how different types of indexical information, namely talker information and the language being spoken, are perceptually integrated in bilingual speech. Using a speeded classification paradigm (Garner, 1974), variability in characteristics of the talker (gender in Experiment 1 and specific talker in Experiment 2) and in the language being spoken (Mandarin vs. English) was manipulated. Listeners from two different language backgrounds, English monolinguals and Mandarin-English bilinguals, were asked to classify short, meaningful sentences obtained from different Mandarin-English bilingual talkers on these indexical dimensions. Results for the gender-language classification (Exp. 1) showed a significant, symmetrical interference effect for both listener groups, indicating that gender information and language are processed in an integral manner. For talker-language classification (Exp. 2), language interfered more with talker than vice versa for the English monolinguals, but symmetrical interference was found for the Mandarin-English bilinguals. These results suggest both that talker-specificity is not fully segregated from language-specificity, and that bilinguals exhibit more balanced classification along various indexical dimensions of speech. Currently, follow-up studies investigate this talker-language dependency for bilingual listeners who do not speak Mandarin in order to disentangle the role of bilingualism versus language familiarity.
  • Verdonschot, R. G., La Heij, W., Tamaoka, K., Kiyama, S., You, W.-P., & Schiller, N. O. (2013). The multiple pronunciations of Japanese kanji: A masked priming investigation. Quarterly Journal of Experimental Psychology, 66(10), 2023-2038. doi:10.1080/17470218.2013.773050.

    Abstract

    English words with an inconsistent grapheme-to-phoneme conversion or with more than one pronunciation (homographic heterophones; e.g., lead-/l epsilon d/, /lid/) are read aloud more slowly than matched controls, presumably due to competition processes. In Japanese kanji, the majority of the characters have multiple readings for the same orthographic unit: the native Japanese reading (KUN) and the derived Chinese reading (ON). This leads to the question of whether reading these characters also shows processing costs. Studies examining this issue have provided mixed evidence. The current study addressed the question of whether processing of these kanji characters leads to the simultaneous activation of their KUN and ON readings. This was measured in a direct way in a masked priming paradigm. In addition, we assessed whether the relative frequencies of the KUN and ON pronunciations (dominance ratio, measured in compound words) affect the amount of priming. The results of two experiments showed that: (a) a single kanji, presented as a masked prime, facilitates the reading of (katakana transcriptions of) its KUN and ON pronunciations; however, (b) this was most consistently found when the dominance ratio was around 50% (no strong dominance towards either pronunciation) and when the dominance was towards the ON reading (high-ON group). When the dominance was towards the KUN reading (high-KUN group), no significant priming for the ON reading was observed. Implications for models of kanji processing are discussed.
  • Verdonschot, R. G., Nakayama, M., Zhang, Q., Tamaoka, K., & Schiller, N. O. (2013). The proximate phonological unit of Chinese-English bilinguals: Proficiency matters. PLoS One, 8(4): e61454. doi:10.1371/journal.pone.0061454.

    Abstract

    An essential step to create phonology according to the language production model by Levelt, Roelofs and Meyer is to assemble phonemes into a metrical frame. However, recently, it has been proposed that different languages may rely on different grain sizes of phonological units to construct phonology. For instance, it has been proposed that, instead of phonemes, Mandarin Chinese uses syllables and Japanese uses moras to fill the metrical frame. In this study, we used a masked priming-naming task to investigate how bilinguals assemble their phonology for each language when the two languages differ in grain size. Highly proficient Mandarin Chinese-English bilinguals showed a significant masked onset priming effect in English (L2), and a significant masked syllabic priming effect in Mandarin Chinese (L1). These results suggest that their proximate unit is phonemic in L2 (English), and that bilinguals may use different phonological units depending on the language that is being processed. Additionally, under some conditions, a significant sub-syllabic priming effect was observed even in Mandarin Chinese, which indicates that L2 phonology exerts influences on L1 target processing as a consequence of having a good command of English.

    Additional information

    English stimuli Chinese stimuli
  • Verdonschot, R. G., Van der Wal, J., Lewis, A. G., Knudsen, B., Von Grebmer zu Wolfsthurn, S., Schiller, N. O., & Hagoort, P. (2024). Information structure in Makhuwa: Electrophysiological evidence for a universal processing account. Proceedings of the National Academy of Sciences of the United States of America, 121(30): e2315438121. doi:10.1073/pnas.2315438121.

    Abstract

    There is evidence from both behavior and brain activity that the way information is structured, through the use of focus, can up-regulate processing of focused constituents, likely to give prominence to the relevant aspects of the input. This is hypothesized to be universal, regardless of the different ways in which languages encode focus. In order to test this universalist hypothesis, we need to go beyond the more familiar linguistic strategies for marking focus, such as by means of intonation or specific syntactic structures (e.g., it-clefts). Therefore, in this study, we examine Makhuwa-Enahara, a Bantu language spoken in northern Mozambique, which uniquely marks focus through verbal conjugation. The participants were presented with sentences that consisted of either a semantically anomalous constituent or a semantically nonanomalous constituent. Moreover, focus on this particular constituent could be either present or absent. We observed a consistent pattern: Focused information generated a more negative N400 response than the same information in nonfocus position. This demonstrates that regardless of how focus is marked, its consequence seems to result in an upregulation of processing of information that is in focus.

    Additional information

    supplementary materials
  • Verga, L., & Kotz, S. A. (2013). How relevant is social interaction in second language learning? Frontiers in Human Neuroscience, 7: 550. doi:10.3389/fnhum.2013.00550.

    Abstract

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Verhoeven, V. J. M., Hysi, P. G., Wojciechowski, R., Fan, Q., Guggenheim, J. A., Höhn, R., MacGregor, S., Hewitt, A. W., Nag, A., Cheng, C.-Y., Yonova-Doing, E., Zhou, X., Ikram, M. K., Buitendijk, G. H. S., McMahon, G., Kemp, J. P., St Pourcain, B., Simpson, C. L., Mäkelä, K.-M., Lehtimäki, T., Kähönen, M., Paterson, A. D., Hosseini, S. M., Wong, H. S., Xu, L., Jonas, J. B., Pärssinen, O., Wedenoja, J., Yip, S. P., Ho, D. W. H., Pang, C. P., Chen, L. J., Burdon, K. P., Craig, J. E., Klein, B. E. K., Klein, R., Haller, T., Metspalu, A., Khor, C.-C., Tai, E.-S., Aung, T., Vithana, E., Tay, W.-T., Barathi, V. A., Chen, P., Li, R., Liao, J., Zheng, Y., Ong, R. T., Döring, A., Evans, D. M., Timpson, N. J., Verkerk, A. J. M. H., Meitinger, T., Raitakari, O., Hawthorne, F., Spector, T. D., Karssen, L. C., Pirastu, M., Murgia, F., Ang, W., Mishra, A., Montgomery, G. W., Pennell, C. E., Cumberland, P. M., Cotlarciuc, I., Mitchell, P., Wang, J. J., Schache, M., Janmahasatian, S., Janmahasathian, S., Igo, R. P., Lass, J. H., Chew, E., Iyengar, S. K., Gorgels, T. G. M. F., Rudan, I., Hayward, C., Wright, A. F., Polasek, O., Vatavuk, Z., Wilson, J. F., Fleck, B., Zeller, T., Mirshahi, A., Müller, C., Uitterlinden, A. G., Rivadeneira, F., Vingerling, J. R., Hofman, A., Oostra, B. A., Amin, N., Bergen, A. A. B., Teo, Y.-Y., Rahi, J. S., Vitart, V., Williams, C., Baird, P. N., Wong, T.-Y., Oexle, K., Pfeiffer, N., Mackey, D. A., Young, T. L., van Duijn, C. M., Saw, S.-M., Bailey-Wilson, J. E., Stambolian, D., Klaver, C. C., Hammond, C. J., Consortium for Refractive Error and Myopia (CREAM), The Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Research Group, Wellcome Trust Case Control Consortium 2 (WTCCC2), & The Fuchs' Genetics Multi-Center Study Group (2013). Genome-wide meta-analyses of multiancestry cohorts identify multiple new susceptibility loci for refractive error and myopia. Nature Genetics, 45(3), 314-318. doi:10.1038/ng.2554.

    Abstract

    Refractive error is the most common eye disorder worldwide and is a prominent cause of blindness. Myopia affects over 30% of Western populations and up to 80% of Asians. The CREAM consortium conducted genome-wide meta-analyses, including 37,382 individuals from 27 studies of European ancestry and 8,376 from 5 Asian cohorts. We identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. Combined analysis identified 8 additional associated loci. The new loci include candidate genes with functions in neurotransmission (GRIA4), ion transport (KCNQ5), retinoic acid metabolism (RDH5), extracellular matrix remodeling (LAMA2 and BMP2) and eye development (SIX6 and PRSS56). We also confirmed previously reported associations with GJD2 and RASGRF1. Risk score analysis using associated SNPs showed a tenfold increased risk of myopia for individuals carrying the highest genetic load. Our results, based on a large meta-analysis across independent multiancestry studies, considerably advance understanding of the mechanisms involved in refractive error and myopia.
  • Verkerk, A., & Lestrade, S. (2008). The encoding of adjectives. In M. Van Koppen, & B. Botma (Eds.), Linguistics in the Netherlands 2008 (pp. 157-168). Amsterdam: Benjamins.

    Abstract

    In this paper, we will give a unified account of the cross-linguistic variation in the encoding of adjectives in predicative and attributive constructions. Languages may differ in the encoding strategy of adjectives in the predicative domain (Stassen 1997), and sometimes change this strategy in the attributive domain (Verkerk 2007). We will show that the interaction of two principles, that of faithfulness to the semantic class of a lexical root and that of faithfulness to discourse functions, can account for all attested variation in the encoding of adjectives.
  • Verkerk, A., & Frostad, B. H. (2013). The encoding of manner predications and resultatives in Oceanic: A typological and historical overview. Oceanic Linguistics, 52, 1-35. doi:10.1353/ol.2013.0010.

    Abstract

    This paper is concerned with the encoding of resultatives and manner predications in Oceanic languages. Our point of departure is a typological overview of the encoding strategies and their geographical distribution, and we investigate their historical traits by the use of phylogenetic comparative methods. A full theory of the historical pathways is not always accessible for all the attested encoding strategies, given the data available for this study. However, tentative theories about the development and origin of the attested strategies are given. One of the most frequent strategy types used to encode both manner predications and resultatives has been given special emphasis. This is a construction in which a reflex form of the Proto-Oceanic causative *pa-/*paka- modifies the second verb in serial verb constructions.

    Additional information

    52.1.verkerk_supp01.pdf
  • Verkerk, A. (2013). Scramble, scurry and dash: The correlation between motion event encoding and manner verb lexicon size in Indo-European. Language Dynamics and Change, 3, 169-217. doi:10.1163/22105832-13030202.

    Abstract

    In recent decades, much has been discovered about the different ways in which people can talk about motion (Talmy, 1985, 1991; Slobin, 1996, 1997, 2004). Slobin (1997) has suggested that satellite-framed languages typically have a larger and more diverse lexicon of manner of motion verbs (such as run, fly, and scramble) when compared to verb-framed languages. Slobin (2004) has claimed that larger manner of motion verb lexicons originate over time because codability factors increase the accessibility of manner in satellite-framed languages. In this paper I investigate the dependency between the use of the satellite-framed encoding construction and the size of the manner verb lexicon. The data used come from 20 Indo-European languages. The methodology applied is a range of phylogenetic comparative methods adopted from biology, which allow for an investigation of this dependency while taking into account the shared history between these 20 languages. The results provide evidence that Slobin's hypothesis was correct, and indeed there seems to be a relationship between the use of the satellite-framed construction and the size of the manner verb lexicon.
  • Vernes, S. C., Newbury, D. F., Abrahams, B. S., Winchester, L., Nicod, J., Groszer, M., Alarcón, M., Oliver, P. L., Davies, K. E., Geschwind, D. H., Monaco, A. P., & Fisher, S. E. (2008). A functional genetic link between distinct developmental language disorders. New England Journal of Medicine, 359(22), 2337 -2345. doi:10.1056/NEJMoa0802828.

    Abstract

    BACKGROUND: Rare mutations affecting the FOXP2 transcription factor cause a monogenic speech and language disorder. We hypothesized that neural pathways downstream of FOXP2 influence more common phenotypes, such as specific language impairment. METHODS: We performed genomic screening for regions bound by FOXP2 using chromatin immunoprecipitation, which led us to focus on one particular gene that was a strong candidate for involvement in language impairments. We then tested for associations between single-nucleotide polymorphisms (SNPs) in this gene and language deficits in a well-characterized set of 184 families affected with specific language impairment. RESULTS: We found that FOXP2 binds to and dramatically down-regulates CNTNAP2, a gene that encodes a neurexin and is expressed in the developing human cortex. On analyzing CNTNAP2 polymorphisms in children with typical specific language impairment, we detected significant quantitative associations with nonsense-word repetition, a heritable behavioral marker of this disorder (peak association, P=5.0x10(-5) at SNP rs17236239). Intriguingly, this region coincides with one associated with language delays in children with autism. CONCLUSIONS: The FOXP2-CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language.

    Additional information

    nejm_vernes_2337sa1.pdf
  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations and demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • Viaro, M., Bercelli, F., & Rossano, F. (2008). Una relazione terapeutica: Il terapeuta allenatore [A therapeutic relationship: The therapist as coach]. Connessioni: Rivista di consulenza e ricerca sui sistemi umani, 20, 95-105.
  • von Stutterheim, C., Flecken, M., & Carroll, M. (2013). Introduction: Conceptualizing in a second language. International Review of Applied Linguistics in Language Teaching, 51(2), 77-85. doi:10.1515/iral-2013-0004.
