Publications

  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS One, 5(12), e14465. doi:10.1371/journal.pone.0014465.

    Abstract

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility to identify conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
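
    A minimal sketch of the single-trial decoding setup described above, using scikit-learn's L1-penalized logistic regression as a rough stand-in for the paper's Bayesian logistic regression with a multivariate Laplace prior; the data, dimensions, and labels below are synthetic placeholders, not the authors' pipeline.

      # Decode object category from flattened channel x time EEG-like features (toy data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_channels, n_samples = 200, 32, 50                  # hypothetical dimensions
      X = rng.standard_normal((n_trials, n_channels * n_samples))    # one feature vector per trial
      y = rng.integers(0, 2, size=n_trials)                          # binary category (e.g. animal vs. tool)

      # A sparse (Laplace-like) penalty keeps only a small set of informative features.
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
      scores = cross_val_score(clf, X, y, cv=5)
      print(f"mean cross-validated accuracy: {scores.mean():.2f}")   # ~0.5 on random data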
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
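
    A minimal sketch of the cross-modal train/test logic described above (not the authors' searchlight implementation): a classifier trained on trials from one stimulus modality is tested on another, and above-chance transfer indicates modality-independent category information. All data and dimensions are invented placeholders.

      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(1)
      n_trials, n_voxels = 100, 123                          # e.g. voxels within one searchlight sphere
      X_written = rng.standard_normal((n_trials, n_voxels))  # written-word trials
      X_spoken = rng.standard_normal((n_trials, n_voxels))   # spoken-word trials
      y_written = rng.integers(0, 2, size=n_trials)          # category labels (e.g. animal vs. tool)
      y_spoken = rng.integers(0, 2, size=n_trials)

      clf = LinearSVC(C=1.0, max_iter=10000)
      clf.fit(X_written, y_written)                          # train on one modality
      print(f"cross-modal accuracy: {clf.score(X_spoken, y_spoken):.2f}")  # ~0.5 on random data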
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, they also apply less strict criteria.
  • Simon, E., Escudero, P., & Broersma, M. (2010). Learning minimally different words in a third language: L2 proficiency as a crucial predictor of accuracy in an L3 word learning task. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the Sixth International Symposium on the Acquisition of Second Language Speech (New Sounds 2010).
  • Simon, E., & Sjerps, M. J. (2017). Phonological category quality in the mental lexicon of child and adult learners. International Journal of Bilingualism, 21(4), 474-499. doi:10.1177/1367006915626589.

    Abstract

    Aims and objectives: The aim was to identify which criteria children use to decide on the category membership of native and non-native vowels, and to get insight into the organization of phonological representations in the bilingual mind. Methodology: The study consisted of two cross-language mispronunciation detection tasks in which L2 vowels were inserted into L1 words and vice versa. In Experiment 1, 10- to 12-year-old Dutch-speaking children were presented with Dutch words which were either pronounced with the target Dutch vowel or with an English vowel inserted in the Dutch consonantal frame. Experiment 2 was a mirror of the first, with English words which were pronounced “correctly” or which were “mispronounced” with a Dutch vowel. Data and analysis: Analyses focused on the extent to which child and adult listeners accepted substitutions of Dutch vowels by English ones, and vice versa. Findings: The results of Experiment 1 revealed that between the ages of ten and twelve children have well-established phonological vowel categories in their native language. However, Experiment 2 showed that in their non-native language, children tended to accept mispronounced items which involve sounds from their native language. At the same time, though, they did not fully rely on their native phonemic inventory because the children accepted most of the correctly pronounced English items. Originality: While many studies have examined native and non-native perception by infants and adults, studies on first and second language perception of school-age children are rare. This study adds to the body of literature aimed at expanding our knowledge in this area. Implications: The study has implications for models of the organization of the bilingual mind: while proficient adult non-native listeners generally have clearly separated sets of phonological representations for their two languages, for non-proficient child learners the L1 phonology still exerts a strong influence on the L2 phonology.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%) suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Sjerps, M. J., Fox, N. P., Johnson, K., & Chang, E. F. (2019). Speaker-normalized sound representations in the human auditory cortex. Nature Communications, 10: 2465. doi:10.1038/s41467-019-10365-z.

    Abstract

    The acoustic dimensions that distinguish speech sounds (like the vowel differences in “boot” and “boat”) also differentiate speakers’ voices. Therefore, listeners must normalize across speakers without losing linguistic information. Past behavioral work suggests an important role for auditory contrast enhancement in normalization: preceding context affects listeners’ perception of subsequent speech sounds. Here, using intracranial electrocorticography in humans, we investigate whether and how such context effects arise in auditory cortex. Participants identified speech sounds that were preceded by phrases from two different speakers whose voices differed along the same acoustic dimension as target words (the lowest resonance of the vocal tract). In every participant, target vowels evoke a speaker-dependent neural response that is consistent with the listener’s perception, and which follows from a contrast enhancement model. Auditory cortex processing thus displays a critical feature of normalization, allowing listeners to extract meaningful content from the voices of diverse speakers.

    Additional information

    41467_2019_10365_MOESM1_ESM.pdf
  • Skeide, M. A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical crosstalk in the visual system of illiterates. Science Advances, 3(5): e1602612. doi:10.1126/sciadv.1602612.

    Abstract

    Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition.
  • Skiba, R., Wittenburg, F., & Trilsbeek, P. (2004). New DoBeS web site: Contents & functions. Language Archive Newsletter, 1(2), 4-4.
  • Skirgard, H., Roberts, S. G., & Yencken, L. (2017). Why are some languages confused for others? Investigating data from the Great Language Game. PLoS One, 12(4): e0165934. doi:10.1371/journal.pone.0165934.

    Abstract

    In this paper we explore the results of a large-scale online game called ‘the Great Language Game’, in which people listen to an audio speech sample and make a forced-choice guess about the identity of the language from 2 or more alternatives. The data include 15 million guesses from 400 audio recordings of 78 languages. We investigate which languages are confused for which in the game, and if this correlates with the similarities that linguists identify between languages. This includes shared lexical items, similar sound inventories and established historical relationships. Our findings are, as expected, that players are more likely to confuse two languages that are objectively more similar. We also investigate factors that may affect players’ ability to accurately select the target language, such as how many people speak the language, how often the language is mentioned in written materials and the economic power of the target language community. We see that non-linguistic factors affect players’ ability to accurately identify the target. For example, languages with wider ‘global reach’ are more often identified correctly. This suggests that both linguistic and cultural knowledge influence the perception and recognition of languages and their similarity.
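
    A minimal sketch of the kind of analysis described above (not the authors' code): tabulate how often each language is guessed when another is the target, then correlate the resulting confusion rates with an independent pairwise similarity measure. The guesses and similarity values are invented for illustration.

      import numpy as np
      from scipy.stats import spearmanr

      langs = ["Swedish", "Norwegian", "Danish", "Korean"]
      idx = {lang: i for i, lang in enumerate(langs)}

      # (target language, guessed language) pairs, as the game would record them
      guesses = [("Swedish", "Norwegian"), ("Swedish", "Swedish"), ("Norwegian", "Danish"),
                 ("Danish", "Norwegian"), ("Korean", "Korean"), ("Swedish", "Danish")]
      confusion = np.zeros((len(langs), len(langs)))
      for target, guess in guesses:
          confusion[idx[target], idx[guess]] += 1
      rates = confusion / confusion.sum(axis=1, keepdims=True)       # row-normalised confusion rates

      # Hypothetical similarity scores (e.g. lexical overlap) for each ordered language pair
      pairs = [(t, g) for t in langs for g in langs if t != g]
      similarity = np.random.default_rng(2).random(len(pairs))
      confusion_per_pair = np.array([rates[idx[t], idx[g]] for t, g in pairs])

      rho, p = spearmanr(similarity, confusion_per_pair)
      print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")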
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: Universals in wh-words. Journal of Pragmatics, 116, 1-20. doi:10.1016/j.pragma.2017.04.004.

    Abstract

    This study investigates whether there is a universal tendency for content interrogative words (wh-words) within a language to sound similar in order to facilitate pragmatic inference in conversation. Gaps between turns in conversation are very short, meaning that listeners must begin planning their turn as soon as possible. While previous research has shown that paralinguistic features such as prosody and eye gaze provide cues to the pragmatic function of upcoming turns, we hypothesise that a systematic phonetic cue that marks interrogative words would also help early recognition of questions (allowing early preparation of answers), for instance wh-words sounding similar within a language. We analyzed 226 languages from 66 different language families by means of permutation tests. We found that initial segments of wh-words were more similar within a language than between languages, also when controlling for language family, geographic area (stratified permutation) and analyzability (compound phrases excluded). Random samples tests revealed that initial segments of wh-words were more similar than initial segments of randomly selected word sets and conceptually related word sets (e.g., body parts, actions, pronouns). Finally, we hypothesized that this cue would be more useful at the beginning of a turn, so the similarity of the initial segment of wh-words should be greater in languages that place them at the beginning of a clause. We gathered typological data on 110 languages, and found the predicted trend, although statistical significance was not attained. While there may be several mechanisms that bring about this pattern (e.g., common derivation), we suggest that the ultimate explanation of the similarity of interrogative words is to facilitate early speech-act recognition. Importantly, this hypothesis can be tested empirically, and the current results provide a sound basis for future experimental tests.
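
    A minimal sketch of a permutation test in the spirit of the analysis described above (not the stratified procedure from the paper): compare how often wh-words share an initial segment within a language against the same statistic after shuffling words across languages. The toy word lists are illustrative only.

      import random
      from itertools import combinations

      wh_words = {
          "English": ["what", "where", "when", "who", "why", "which"],
          "Dutch": ["wat", "waar", "wanneer", "wie", "waarom", "welke"],
          "Italian": ["che", "dove", "quando", "chi", "perche", "quale"],
      }

      def within_language_similarity(groups):
          """Proportion of within-language word pairs that share an initial segment."""
          shared = total = 0
          for words in groups.values():
              for a, b in combinations(words, 2):
                  total += 1
                  shared += (a[0] == b[0])
          return shared / total

      observed = within_language_similarity(wh_words)

      # Null distribution: reassign words to languages at random, keeping group sizes fixed.
      all_words = [w for words in wh_words.values() for w in words]
      rng = random.Random(0)
      null = []
      for _ in range(5000):
          shuffled = all_words[:]
          rng.shuffle(shuffled)
          groups, start = {}, 0
          for lang, words in wh_words.items():
              groups[lang] = shuffled[start:start + len(words)]
              start += len(words)
          null.append(within_language_similarity(groups))

      p = sum(n >= observed for n in null) / len(null)
      print(f"observed similarity = {observed:.2f}, permutation p = {p:.4f}")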
  • Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: The role of the first phoneme in question prediction in context. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1090-1095). Austin, TX: Cognitive Science Society.

    Abstract

    Turn-taking in conversation is a cognitively demanding process that proceeds rapidly due to interlocutors utilizing a range of cues to aid prediction. In the present study we set out to test recent claims that content question words (also called wh-words) sound similar within languages as an adaptation to help listeners predict that a question is about to be asked. We test whether upcoming questions can be predicted based on the first phoneme of a turn and the prior context. We analyze the Switchboard corpus of English by means of a decision tree to test whether /w/ and /h/ are good statistical cues of upcoming questions in conversation. Based on the results, we perform a controlled experiment to test whether people really use these cues to recognize questions. In both studies we show that both the initial phoneme and the sequential context help predict questions. This contributes converging evidence that elements of languages adapt to pragmatic pressures applied during conversation.
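
    A minimal sketch of the classification idea (not the Switchboard analysis itself): predict whether a turn is a question from two simple cues, whether its initial phoneme is /w/ or /h/ and whether the preceding turn was a question. The feature table and labels are invented.

      from sklearn.tree import DecisionTreeClassifier

      # features: [turn starts with /w/ or /h/ (0/1), previous turn was a question (0/1)]
      X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 0], [1, 1], [0, 0]]
      y = [1, 1, 0, 0, 1, 0, 1, 0]                     # 1 = question, 0 = non-question

      tree = DecisionTreeClassifier(max_depth=2, random_state=0)
      tree.fit(X, y)
      print(tree.predict([[1, 0], [0, 1]]))            # predictions for two new turns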
  • Smalle, E., Szmalec, A., Bogaerts, L., Page, M. P. A., Narang, V., Misra, D., Araujo, S., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., & Huettig, F. (2019). Literacy improves short-term serial recall of spoken verbal but not visuospatial items - Evidence from illiterate and literate adults. Cognition, 185, 144-150. doi:10.1016/j.cognition.2019.01.012.

    Abstract

    It is widely accepted that specific memory processes, such as serial-order memory, are involved in written language development and predictive of reading and spelling abilities. The reverse question, namely whether orthographic abilities also affect serial-order memory, has hardly been investigated. In the current study, we compared 20 illiterate people with a group of 20 literate matched controls on a verbal and a visuospatial version of the Hebb paradigm, measuring both short- and long-term serial-order memory abilities. We observed better short-term serial-recall performance for the literate compared with the illiterate people. This effect was stronger in the verbal than in the visuospatial modality, suggesting that the improved capacity of the literate group is a consequence of learning orthographic skills. The long-term consolidation of ordered information was comparable across groups, for both stimulus modalities. The implications of these findings for current views regarding the bi-directional interactions between memory and written language development are discussed.

    Additional information

    Supplementary material Datasets
  • De Smedt, K., Hinrichs, E., Meurers, D., Skadiņa, I., Sanford Pedersen, B., Navarretta, C., Bel, N., Lindén, K., Lopatková, M., Hajič, J., Andersen, G., & Lenkiewicz, P. (2014). CLARA: A new generation of researchers in common language resources and their applications. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 2166-2174).
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2017). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Journal of Memory and Language, 93, 276-303. doi:10.1016/j.jml.2016.08.005.

    Abstract

    Ambiguity in natural language is ubiquitous, yet spoken communication is effective due to integration of information carried in the speech signal with information available in the surrounding multimodal landscape. Language mediated visual attention requires visual and linguistic information integration and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model (TRACE) in which multimodal information is combined at the point of the lexical representations of words generated predictions of a stronger effect of phonological rhyme relative to semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information, compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes to the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language mediated visual attention, that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
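
    A minimal sketch of the kind of dependency the model formalises (not the published model itself): the probability of one binary classification follows a logistic function of an acoustic dimension, and the location and steepness of that boundary are allowed to depend on the outcome of the other classification. Parameter values are arbitrary.

      import numpy as np

      def p_class2(x, outcome_class1, locations=(0.0, 0.8), slopes=(4.0, 6.0)):
          """Logistic class boundary for classification 2, indexed by the outcome of classification 1."""
          loc, slope = locations[outcome_class1], slopes[outcome_class1]
          return 1.0 / (1.0 + np.exp(-slope * (x - loc)))

      x = 0.5  # stimulus value on some acoustic dimension (arbitrary units)
      print(p_class2(x, outcome_class1=0))   # boundary used when classification 1 gave outcome 0
      print(p_class2(x, outcome_class1=1))   # shifted, steeper boundary after outcome 1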
  • Smits, A., Seijdel, N., Scholte, H., Heywood, C., Kentridge, R., & de Haan, E. (2019). Action blindsight and antipointing in a hemianopic patient. Neuropsychologia, 128, 270-275. doi:10.1016/j.neuropsychologia.2018.03.029.

    Abstract

    Blindsight refers to the observation of residual visual abilities in the hemianopic field of patients without a functional V1. Given the within- and between-subject variability in the preserved abilities and the phenomenal experience of blindsight patients, the fine-grained description of the phenomenon is still debated. Here we tested a patient with established “perceptual” and “attentional” blindsight (c.f. Danckert and Rossetti, 2005). Using a pointing paradigm patient MS, who suffers from a complete left homonymous hemianopia, showed clear above chance manual localisation of ‘unseen’ targets. In addition, target presentations in his blind field led MS, on occasion, to spontaneous responses towards his sighted field. Structural and functional magnetic resonance imaging was conducted to evaluate the magnitude of V1 damage. Results revealed the presence of a calcarine sulcus in both hemispheres, yet his right V1 is reduced, structurally disconnected and shows no fMRI response to visual stimuli. Thus, visual stimulation of his blind field can lead to “action blindsight” and spontaneous antipointing, in absence of a functional right V1. With respect to the antipointing, we suggest that MS may have registered the stimulation and subsequently presumes it must have been in his intact half field.

    Additional information

    video
  • Snijders Blok, L., Kleefstra, T., Venselaar, H., Maas, S., Kroes, H. Y., Lachmeijer, A. M. A., Van Gassen, K. L. I., Firth, H. V., Tomkins, S., Bodek, S., The DDD Study, Õunap, K., Wojcik, M. H., Cunniff, C., Bergstrom, K., Powis, Z., Tang, S., Shinde, D. N., Au, C., Iglesias, A. D., Izumi, K., Leonard, J., Tayoun, A. A., Baker, S. W., Tartaglia, M., Niceta, M., Dentici, M. L., Okamoto, N., Miyake, N., Matsumoto, N., Vitobello, A., Faivre, L., Philippe, C., Gilissen, C., Wiel, L., Pfundt, R., Deriziotis, P., Brunner, H. G., & Fisher, S. E. (2019). De novo variants disturbing the transactivation capacity of POU3F3 cause a characteristic neurodevelopmental disorder. The American Journal of Human Genetics, 105(2), 403-412. doi:10.1016/j.ajhg.2019.06.007.

    Abstract

    POU3F3, also referred to as Brain-1, is a well-known transcription factor involved in the development of the central nervous system, but it has not previously been associated with a neurodevelopmental disorder. Here, we report the identification of 19 individuals with heterozygous POU3F3 disruptions, most of which are de novo variants. All individuals had developmental delays and/or intellectual disability and impairments in speech and language skills. Thirteen individuals had characteristic low-set, prominent, and/or cupped ears. Brain abnormalities were observed in seven of eleven MRI reports. POU3F3 is an intronless gene, insensitive to nonsense-mediated decay, and 13 individuals carried protein-truncating variants. All truncating variants that we tested in cellular models led to aberrant subcellular localization of the encoded protein. Luciferase assays demonstrated negative effects of these alleles on transcriptional activation of a reporter with a FOXP2-derived binding motif. In addition to the loss-of-function variants, five individuals had missense variants that clustered at specific positions within the functional domains, and one small in-frame deletion was identified. Two missense variants showed reduced transactivation capacity in our assays, whereas one variant displayed gain-of-function effects, suggesting a distinct pathophysiological mechanism. In bioluminescence resonance energy transfer (BRET) interaction assays, all the truncated POU3F3 versions that we tested had significantly impaired dimerization capacities, whereas all missense variants showed unaffected dimerization with wild-type POU3F3. Taken together, our identification and functional cell-based analyses of pathogenic variants in POU3F3, coupled with a clinical characterization, implicate disruptions of this gene in a characteristic neurodevelopmental disorder.
  • Snijders, T. M., Petersson, K. M., & Hagoort, P. (2010). Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage, 52, 1633-1644. doi:10.1016/j.neuroimage.2010.05.035.

    Abstract

    In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interactions (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
  • Snowdon, C. T., Pieper, B. A., Boe, C. Y., Cronin, K. A., Kurian, A. V., & Ziegler, T. E. (2010). Variation in oxytocin is related to variation in affiliative behavior in monogamous, pairbonded tamarins. Hormones and Behavior, 58(4), 614-618. doi:10.1016/j.yhbeh.2010.06.014.

    Abstract

    Oxytocin plays an important role in monogamous pairbonded female voles, but not in polygamous voles. Here we examined a socially monogamous cooperatively breeding primate where both sexes share in parental care and territory defense for within species variation in behavior and female and male oxytocin levels in 14 pairs of cotton-top tamarins (Saguinus oedipus). In order to obtain a stable chronic assessment of hormones and behavior, we observed behavior and collected urinary hormonal samples across the tamarins’ 3-week ovulatory cycle. We found similar levels of urinary oxytocin in both sexes. However, basal urinary oxytocin levels varied 10-fold across pairs and pair-mates displayed similar oxytocin levels. Affiliative behavior (contact, grooming, sex) also varied greatly across the sample and explained more than half the variance in pair oxytocin levels. The variables accounting for variation in oxytocin levels differed by sex. Mutual contact and grooming explained most of the variance in female oxytocin levels, whereas sexual behavior explained most of the variance in male oxytocin levels. The initiation of contact by males and solicitation of sex by females were related to increased levels of oxytocin in both. This study demonstrates within-species variation in oxytocin that is directly related to levels of affiliative and sexual behavior. However, different behavioral mechanisms influence oxytocin levels in males and females and a strong pair relationship (as indexed by high levels of oxytocin) may require the activation of appropriate mechanisms for both sexes.
  • Soares, S. M. P., Ong, G., Abutalebi, J., Del Maschio, N., Sewell, D., & Weekes, B. (2019). A diffusion model approach to analyzing performance on the flanker task: the role of the DLPFC. Bilingualism: Language and Cognition, 22(5), 1194-1208. doi:10.1017/S1366728918000974.

    Abstract

    The anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC) are involved in conflict detection and conflict resolution, respectively. Here, we investigate how lifelong bilingualism induces neuroplasticity to these structures by employing a novel analysis of behavioural performance. We correlated grey matter volume (GMV) in seniors reported by Abutalebi et al. (2015) with behavioral Flanker task performance fitted using the diffusion model (Ratcliff, 1978). As predicted, we observed significant correlations between GMV in the DLPFC and Flanker performance. However, for monolinguals the non-decision time parameter was significantly correlated with GMV in the left DLPFC, whereas for bilinguals the correlation was significant in the right DLPFC. We also found a significant correlation between age and GMV in left DLPFC and the non-decision time parameter for the conflict effect for monolinguals only. We submit that this is due to cumulative demands on cognitive control over a lifetime of bilingual language processing.
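
    A minimal sketch of a generic drift-diffusion trial simulator, to make the model's parameters concrete (this is not the authors' fitting procedure): evidence accumulates with drift rate v and Gaussian noise until it reaches one of two boundaries separated by a, and the non-decision time t0 is added to the crossing time.

      import numpy as np

      def simulate_trial(v=0.3, a=1.0, t0=0.3, dt=0.001, noise=1.0, rng=None):
          """Return (response time, hit upper boundary?) for one simulated trial."""
          rng = rng or np.random.default_rng()
          x, t = 0.0, 0.0
          while abs(x) < a / 2:                          # unbiased start midway between boundaries
              x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return t + t0, x > 0

      rng = np.random.default_rng(3)
      rts, upper = zip(*(simulate_trial(rng=rng) for _ in range(500)))
      print(f"mean RT = {np.mean(rts):.3f} s, proportion upper-boundary responses = {np.mean(upper):.2f}")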
  • Solberg Økland, H., Todorović, A., Lüttke, C. S., McQueen, J. M., & De Lange, F. P. (2019). Combined predictive effects of sentential and visual constraints in early audiovisual speech processing. Scientific Reports, 9: 7870. doi:10.1038/s41598-019-44311-2.

    Abstract

    In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual constraints have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.
  • Sollis, E., Deriziotis, P., Saitsu, H., Miyake, N., Matsumoto, N., Hoffer, M. J. V., Ruivenkamp, C. A., Alders, M., Okamoto, N., Bijlsma, E. K., Plomp, A. S., & Fisher, S. E. (2017). Equivalent missense variant in the FOXP2 and FOXP1 transcription factors causes distinct neurodevelopmental disorders. Human Mutation, 38(11), 1542-1554. doi:10.1002/humu.23303.

    Abstract

    The closely related paralogues FOXP2 and FOXP1 encode transcription factors with shared functions in the development of many tissues, including the brain. However, while mutations in FOXP2 lead to a speech/language disorder characterized by childhood apraxia of speech (CAS), the clinical profile of FOXP1 variants includes a broader neurodevelopmental phenotype with global developmental delay, intellectual disability and speech/language impairment. Using clinical whole-exome sequencing, we report an identical de novo missense FOXP1 variant identified in three unrelated patients. The variant, p.R514H, is located in the forkhead-box DNA-binding domain and is equivalent to the well-studied p.R553H FOXP2 variant that co-segregates with CAS in a large UK family. We present here for the first time a direct comparison of the molecular and clinical consequences of the same mutation affecting the equivalent residue in FOXP1 and FOXP2. Detailed functional characterization of the two variants in cell model systems revealed very similar molecular consequences, including aberrant subcellular localization, disruption of transcription factor activity and deleterious effects on protein interactions. Nonetheless, clinical manifestations were broader and more severe in the three cases carrying the p.R514H FOXP1 variant than in individuals with the p.R553H variant related to CAS, highlighting divergent roles of FOXP2 and FOXP1 in neurodevelopment.

    Additional information

    humu23303-sup-0001-SuppMat.pdf
  • Soutschek, A., Burke, C. J., Beharelle, A. R., Schreiber, R., Weber, S. C., Karipidis, I. I., Ten Velden, J., Weber, B., Haker, H., Kalenscher, T., & Tobler, P. N. (2017). The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour, 1, 819-827. doi:10.1038/s41562-017-0226-y.

    Abstract

    Women are known to have stronger prosocial preferences than men, but it remains an open question as to how these behavioural differences arise from differences in brain functioning. Here, we provide a neurobiological account for the hypothesized gender difference. In a pharmacological study and an independent neuroimaging study, we tested the hypothesis that the neural reward system encodes the value of sharing money with others more strongly in women than in men. In the pharmacological study, we reduced receptor type-specific actions of dopamine, a neurotransmitter related to reward processing, which resulted in more selfish decisions in women and more prosocial decisions in men. Converging findings from an independent neuroimaging study revealed gender-related activity in neural reward circuits during prosocial decisions. Thus, the neural reward system appears to be more sensitive to prosocial rewards in women than in men, providing a neurobiological account for why women often behave more prosocially than men.

    A large body of evidence suggests that women are often more prosocial (for example, generous, altruistic and inequality averse) than men, at least when other factors such as reputation and strategic considerations are excluded [1,2,3]. This dissociation could result from cultural expectations and gender stereotypes, because in Western societies women are more strongly expected to be prosocial [4,5,6] and sensitive to variations in social context than men [1]. It remains an open question, however, whether and how on a neurobiological level the social preferences of women and men arise from differences in brain functioning. The assumption of gender differences in social preferences predicts that the neural reward system’s sensitivity to prosocial and selfish rewards should differ between women and men. Specifically, the hypothesis would be that the neural reward system is more sensitive to prosocial than selfish rewards in women and more sensitive to selfish than prosocial rewards in men. The goal of the current study was to test in two independent experiments for the hypothesized gender differences on both a pharmacological and a haemodynamic level. In particular, we examined the functions of the neurotransmitter dopamine using a dopamine receptor antagonist, and the role of the striatum (a brain region strongly innervated by dopamine neurons) during social decision-making in women and men using neuroimaging.

    The neurotransmitter dopamine is thought to play a key role in neural reward processing [7,8]. Recent evidence suggests that dopaminergic activity is sensitive not only to rewards for oneself but to rewards for others as well [9]. The assumption that dopamine is sensitive to both self- and other-related outcomes is consistent with the finding that the striatum shows activation for both selfish and shared rewards [10,11,12,13,14,15]. The dopaminergic response may represent a net signal encoding the difference between the value of preferred and unpreferred rewards [8]. Regarding the hypothesized gender differences in social preferences, this account makes the following predictions. If women prefer shared (prosocial) outcomes [2], women’s dopaminergic signals to shared rewards will be stronger than to non-shared (selfish) rewards, so reducing dopaminergic activity should bias women to make more selfish decisions. In line with this hypothesis, a functional imaging study reported enhanced striatal activation in female participants during charitable donations [11]. In contrast, if men prefer selfish over prosocial rewards, dopaminergic activity should be enhanced to selfish compared to prosocial rewards. In line with this view, upregulating dopaminergic activity in a sample of exclusively male participants increased selfish behaviour in a bargaining game [16]. Thus, contrary to the hypothesized effect in women, reducing dopaminergic neurotransmission should render men more prosocial. Taken together, the current study tested the following three predictions: we expected the dopaminergic reward system (1) to be more sensitive to prosocial than selfish rewards in women and (2) to be more sensitive to selfish than prosocial rewards in men. As a consequence of these two predictions, we also predicted (3) dopaminoceptive regions such as the striatum to show stronger activation to prosocial relative to selfish rewards in women than in men.

    To test these predictions, we conducted a pharmacological study in which we reduced dopaminergic neurotransmission with amisulpride. Amisulpride is a dopamine antagonist that is highly specific for dopaminergic D2/D3 receptors [17]. After receiving amisulpride or placebo, participants performed an interpersonal decision task [18,19,20], in which they made choices between a monetary reward only for themselves (selfish reward option) and sharing money with others (prosocial reward option). We expected that blocking dopaminergic neurotransmission with amisulpride, relative to placebo, would result in fewer prosocial choices in women and more prosocial choices in men. To investigate whether potential gender-related effects of dopamine are selective for social decision-making, we also tested the effects of amisulpride on time preferences in a non-social control task that was matched to the interpersonal decision task in terms of choice structure.

    In addition, because dopaminergic neurotransmission plays a crucial role in brain regions involved in value processing, such as the striatum [21], a gender-related role of dopaminergic activity for social decision-making should also be reflected by dissociable activity patterns in the striatum. Therefore, to further test our hypothesis, we investigated the neural correlates of social decision-making in a functional imaging study. In line with our predictions for the pharmacological study, we expected to find stronger striatum activity during prosocial relative to selfish decisions in women, whereas men should show enhanced activity in the striatum for selfish relative to prosocial choices.

    Additional information

    Supplementary Information
  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Speed, L. J., & Majid, A. (2017). Dutch modality exclusivity norms: Simulating perceptual modality in space. Behavior Research Methods, 49(6), 2204-2218. doi:10.3758/s13428-017-0852-3.

    Abstract

    Perceptual information is important for the meaning of nouns. We present modality exclusivity norms for 485 Dutch nouns rated on visual, auditory, haptic, gustatory, and olfactory associations. We found these nouns are highly multimodal. They were rated most dominant in vision, and least in olfaction. A factor analysis identified two main dimensions: one loaded strongly on olfaction and gustation (reflecting joint involvement in flavor), and a second loaded strongly on vision and touch (reflecting joint involvement in manipulable objects). In a second study, we validated the ratings with similarity judgments. As expected, words from the same dominant modality were rated more similar than words from different dominant modalities; but – more importantly – this effect was enhanced when word pairs had high modality strength ratings. We further demonstrated the utility of our ratings by investigating whether perceptual modalities are differentially experienced in space, in a third study. Nouns were categorized into their dominant modality and used in a lexical decision experiment where the spatial position of words was either in proximal or distal space. We found words dominant in olfaction were processed faster in proximal than distal space compared to the other modalities, suggesting olfactory information is mentally simulated as “close” to the body. Finally, we collected ratings of emotion (valence, dominance, and arousal) to assess its role in perceptual space simulation, but the valence did not explain the data. So, words are processed differently depending on their perceptual associations, and strength of association is captured by modality exclusivity ratings.

    Additional information

    13428_2017_852_MOESM1_ESM.xlsx
  • Speed, L., & Majid, A. (2019). Linguistic features of fragrances: The role of grammatical gender and gender associations. Attention, Perception & Psychophysics, 81(6), 2063-2077. doi:10.3758/s13414-019-01729-0.

    Abstract

    Odors are often difficult to identify and name, which leaves them vulnerable to the influence of language. The present study tests the boundaries of the effect of language on odor cognition by examining the effect of grammatical gender. We presented participants with male and female fragrances paired with descriptions of masculine or feminine grammatical gender. In Experiment 1 we found that memory for fragrances was enhanced when the grammatical gender of a fragrance description matched the gender of the fragrance. In Experiment 2 we found memory for fragrances was affected by both grammatical gender and gender associations in fragrance descriptions – recognition memory for odors was higher when the gender was incongruent. In sum, we demonstrated that even subtle aspects of language can affect odor cognition.

    Additional information

    Supplementary material
  • Spilková, H., Brenner, D., Öttl, A., Vondřička, P., Van Dommelen, W., & Ernestus, M. (2010). The Kachna L1/L2 picture replication corpus. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 2432-2436). Paris: European Language Resources Association (ELRA).

    Abstract

    This paper presents the Kachna corpus of spontaneous speech, in which ten Czech and ten Norwegian speakers were recorded both in their native language and in English. The dialogues are elicited using a picture replication task that requires active cooperation and interaction of speakers by asking them to produce a drawing as close to the original as possible. The corpus is appropriate for the study of interactional features and speech reduction phenomena across native and second languages. The combination of productions in non-native English and in speakers’ native language is advantageous for investigation of L2 issues while providing an L1 behaviour reference from all the speakers. The corpus consists of 20 dialogues comprising 12 hours 53 minutes of recording, and was collected in 2008. Preparation of the transcriptions, including a manual orthographic transcription and an automatically generated phonetic transcription, is currently in progress. The phonetic transcriptions are automatically generated by aligning acoustic models with the speech signal on the basis of the orthographic transcriptions and a dictionary of pronunciation variants compiled for the relevant language. Upon completion the corpus will be made available via the European Language Resources Association (ELRA).
  • Stanojevic, M., & Alhama, R. G. (2017). Neural discontinuous constituency parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 1666-1676). Association for Computational Linguistics.

    Abstract

    One of the most pressing issues in discontinuous constituency transition-based parsing is that the relevant information for parsing decisions could be located in any part of the stack or the buffer. In this paper, we propose a solution to this problem by replacing the structured perceptron model with a recursive neural model that computes a global representation of the configuration, therefore allowing even the most remote parts of the configuration to influence the parsing decisions. We also provide a detailed analysis of how this representation should be built out of sub-representations of its core elements (words, trees and stack). Additionally, we investigate how different types of swap oracles influence the results. Our model is the first neural discontinuous constituency parser, and it outperforms all the previously published models on three out of four datasets while on the fourth it obtains second place by a tiny difference.

    Additional information

    http://aclweb.org/anthology/D17-1174
  • Staum Casasanto, L., Jasmin, K., & Casasanto, D. (2010). Virtually accommodating: Speech rate accommodation to a virtual interlocutor. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 127-132). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Stehouwer, H., & van Zaanen, M. (2010). Enhanced suffix arrays as language models: Virtual k-testable languages. In J. M. Sempere, & P. García (Eds.), Grammatical inference: Theoretical results and applications 10th International Colloquium, ICGI 2010, Valencia, Spain, September 13-16, 2010. Proceedings (pp. 305-308). Berlin: Springer.

    Abstract

    In this article, we propose the use of suffix arrays to efficiently implement n-gram language models with practically unlimited size n. This approach, which is used with synchronous back-off, allows us to distinguish between alternative sequences using large contexts. We also show that we can build this kind of model with additional information for each symbol, such as part-of-speech tags and dependency information. The approach can also be viewed as a collection of virtual k-testable automata. Once built, we can directly access the results of any k-testable automaton generated from the input training data. Synchronous back-off automatically identifies the k-testable automaton with the largest feasible k. We have used this approach in several classification tasks.
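
    As a rough illustration of the core idea above (a minimal sketch of our own, not the authors' implementation), the Python fragment below builds a suffix array over a token sequence and counts occurrences of an n-gram of any length by binary search over the sorted suffixes. All names are ours; the key= argument of bisect requires Python 3.10 or later, and the naive construction is only meant for small examples.

        from bisect import bisect_left, bisect_right

        def build_suffix_array(tokens):
            # Start positions of all suffixes, sorted lexicographically.
            # (Naive construction; fine for a sketch, not for real corpora.)
            return sorted(range(len(tokens)), key=lambda i: tokens[i:])

        def ngram_count(tokens, suffix_array, ngram):
            # Every suffix that starts with `ngram` occupies one contiguous
            # block of the suffix array, so two binary searches give the
            # count, whatever the length of the n-gram.
            ngram = list(ngram)
            prefix = lambda i: tokens[i:i + len(ngram)]
            lo = bisect_left(suffix_array, ngram, key=prefix)
            hi = bisect_right(suffix_array, ngram, key=prefix)
            return hi - lo

        corpus = "the cat sat on the mat and the cat slept".split()
        sa = build_suffix_array(corpus)
        print(ngram_count(corpus, sa, ["the", "cat"]))         # 2
        print(ngram_count(corpus, sa, ["the", "cat", "sat"]))  # 1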
  • Stehouwer, H., & Van Zaanen, M. (2010). Finding patterns in strings using suffix arrays. In M. Ganzha, & M. Paprzycki (Eds.), Proceedings of the International Multiconference on Computer Science and Information Technology, October 18–20, 2010. Wisła, Poland (pp. 505-511). IEEE.

    Abstract

    Finding regularities in large data sets requires implementations of systems that are efficient in both time and space requirements. Here, we describe a newly developed system that exploits the internal structure of the enhanced suffix array to find significant patterns in a large collection of sequences. The system searches exhaustively for all significantly compressing patterns, where patterns may consist of symbols and skips or wildcards. We demonstrate a possible application of the system by detecting interesting patterns in a Dutch and an English corpus.
  • Stehouwer, H., & van Zaanen, M. (2010). Using suffix arrays as language models: Scaling the n-gram. In Proceedings of the 22nd Benelux Conference on Artificial Intelligence (BNAIC 2010), October 25-26, 2010.

    Abstract

    In this article, we propose the use of suffix arrays to implement n-gram language models with practically unlimited size n. These unbounded n-grams are called ∞-grams. This approach allows us to use large contexts efficiently to distinguish between different alternative sequences while applying synchronous back-off. From a practical point of view, the approach has been applied within the context of spelling confusibles, verb and noun agreement, and prenominal adjective ordering. These initial experiments show promising results, and we relate the performance to the size of the n-grams used for disambiguation.
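
    To make synchronous back-off concrete, the sketch below (again our own illustrative code with invented names, not the system described above) compares all candidate words at the longest context length for which at least one candidate has been observed, and only backs off to shorter contexts when none has.

        def make_counter(tokens):
            # Naive n-gram counter over a token list; in the work above this
            # lookup is what the suffix array makes efficient.
            def count(seq):
                n = len(seq)
                return sum(tokens[i:i + n] == seq for i in range(len(tokens) - n + 1))
            return count

        def choose(count, left_context, alternatives):
            # Synchronous back-off: shorten the context for all alternatives
            # at once and decide at the largest n where any alternative was seen.
            for n in range(len(left_context), -1, -1):
                context = left_context[len(left_context) - n:]
                scores = {alt: count(context + [alt]) for alt in alternatives}
                if any(scores.values()):
                    return max(scores, key=scores.get), n
            return alternatives[0], 0  # nothing observed at any context length

        count = make_counter("i can see the sea from here and the sea is blue".split())
        print(choose(count, ["boats", "on", "the"], ["sea", "see"]))  # ('sea', 1)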
  • Stergiakouli, E., Martin, J., Hamshere, M. L., Heron, J., St Pourcain, B., Timpson, N. J., Thapar, A., & Smith, G. D. (2017). Association between polygenic risk scores for attention-deficit hyperactivity disorder and educational and cognitive outcomes in the general population. International Journal of Epidemiology, 46(2), 421-428. doi:10.1093/ije/dyw216.

    Abstract

    Background: Children with a diagnosis of attention-deficit hyperactivity disorder (ADHD) have lower cognitive ability and are at risk of adverse educational outcomes; ADHD genetic risks have been found to predict childhood cognitive ability and other neurodevelopmental traits in the general population; thus genetic risks might plausibly also contribute to cognitive ability later in development and to educational underachievement.

    Methods: We generated ADHD polygenic risk scores in the Avon Longitudinal Study of Parents and Children participants (maximum N: 6928 children and 7280 mothers) based on the results of a discovery clinical sample, a genome-wide association study of 727 cases with ADHD diagnosis and 5081 controls. We tested if ADHD polygenic risk scores were associated with educational outcomes and IQ in adolescents and their mothers.

    Results: High ADHD polygenic scores in adolescents were associated with worse educational outcomes at Key Stage 3 [national tests conducted at age 13–14 years; β = −1.4 (−2.0 to −0.8), P = 2.3 × 10−6], at General Certificate of Secondary Education exams at age 15–16 years [β = −4.0 (−6.1 to −1.9), P = 1.8 × 10−4], reduced odds of sitting Key Stage 5 examinations at age 16–18 years [odds ratio (OR) = 0.90 (0.88 to 0.97), P = 0.001] and lower IQ scores at age 15.5 [β = −0.8 (−1.2 to −0.4), P = 2.4 × 10−4]. Moreover, maternal ADHD polygenic scores were associated with lower maternal educational achievement [β = −0.09 (−0.10 to −0.06), P = 0.005] and lower maternal IQ [β = −0.6 (−1.2 to −0.1), P = 0.03].

    Conclusions: ADHD diagnosis risk alleles impact on functional outcomes in two generations (mother and child) and likely have intergenerational environmental effects.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

    OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m^2" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of that phenotype which appropriately accounts for covariation in height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, was performed in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC). RESULTS: GWAS based on BMI[x] yielded marked differences in the genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at genome-wide significance level [rs11676272: 0.28 kg/m^3.1 change per allele G (0.19, 0.38), P = 6 × 10^-9]. In contrast, they showed only marginal evidence of association with conventional BMI [rs11676272: 0.25 kg/m^2 (0.15, 0.35), P = 6 × 10^-7]. Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences to that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant, and it was strongly correlated with expression of this gene. Our work highlights the importance of well-understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits.

    Additional information

    oby20840-sup-0001-suppinfo.docx
  • Stergiakouli, E., Smith, G. D., Martin, J., Skuse, D. H., Viechtbauer, W., Ring, S. M., Ronald, A., Evans, D. E., Fisher, S. E., Thapar, A., & St Pourcain, B. (2017). Shared genetic influences between dimensional ASD and ADHD symptoms during child and adolescent development. Molecular Autism, 8: 18. doi:10.1186/s13229-017-0131-2.

    Abstract

    Background: Shared genetic influences between attention-deficit/hyperactivity disorder (ADHD) symptoms and autism spectrum disorder (ASD) symptoms have been reported. Cross-trait genetic relationships are, however, subject to dynamic changes during development. We investigated the continuity of genetic overlap between ASD and ADHD symptoms in a general population sample during childhood and adolescence. We also studied uni- and cross-dimensional trait-disorder links with respect to genetic ADHD and ASD risk.

    Methods: Social-communication difficulties (N ≤ 5551, Social and Communication Disorders Checklist, SCDC) and combined hyperactive-impulsive/inattentive ADHD symptoms (N ≤ 5678, Strengths and Difficulties Questionnaire, SDQ-ADHD) were repeatedly measured in a UK birth cohort (ALSPAC, age 7 to 17 years). Genome-wide summary statistics on clinical ASD (5305 cases; 5305 pseudo-controls) and ADHD (4163 cases; 12,040 controls/pseudo-controls) were available from the Psychiatric Genomics Consortium. Genetic trait variances and genetic overlap between phenotypes were estimated using genome-wide data.

    Results: In the general population, genetic influences for SCDC and SDQ-ADHD scores were shared throughout development. Genetic correlations across traits reached a similar strength and magnitude (cross-trait rg ≤ 1, pmin = 3 × 10−4) as those between repeated measures of the same trait (within-trait rg ≤ 0.94, pmin = 7 × 10−4). Shared genetic influences between traits, especially during later adolescence, may implicate variants in K-RAS signalling upregulated genes (p-meta = 6.4 × 10−4). Uni-dimensionally, each population-based trait mapped to the expected behavioural continuum: risk-increasing alleles for clinical ADHD were persistently associated with SDQ-ADHD scores throughout development (marginal regression R2 = 0.084%). An age-specific genetic overlap between clinical ASD and social-communication difficulties during childhood was also shown, as per previous reports. Cross-dimensionally, however, neither SCDC nor SDQ-ADHD scores were linked to genetic risk for disorder.

    Conclusions: In the general population, genetic aetiologies between social-communication difficulties and ADHD symptoms are shared throughout child and adolescent development and may implicate similar biological pathways that co-vary during development. Within both the ASD and the ADHD dimension, population-based traits are also linked to clinical disorder, although much larger clinical discovery samples are required to reliably detect cross-dimensional trait-disorder relationships.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

  • Stivers, T. (2004). Potilaan vastarinta: Keino vaikuttaa lääkärin hoitopäätökseen. Sosiaalilääketieteellinen Aikakauslehti, 41, 199-213.
  • Stivers, T., & Rossano, F. (2010). A scalar view of response relevance. Research on Language and Social Interaction, 43, 49-56. doi:10.1080/08351810903471381.
  • Stivers, T. (2010). An overview of the question-response system in American English conversation. Journal of Pragmatics, 42, 2772-2781. doi:10.1016/j.pragma.2010.04.011.

    Abstract

    This article, part of a 10 language comparative project on question–response sequences, discusses these sequences in American English conversation. The data are video-taped spontaneous naturally occurring conversations involving two to five adults. Relying on these data I document the basic distributional patterns of types of questions asked (polar, Q-word or alternative as well as sub-types), types of social actions implemented by these questions (e.g., repair initiations, requests for confirmation, offers or requests for information), and types of responses (e.g., repetitional answers or yes/no tokens). I show that declarative questions are used more commonly in conversation than would be suspected by traditional grammars of English and questions are used for a wider range of functions than grammars would suggest. Finally, this article offers distributional support for the idea that responses that are better “fitted” with the question are preferred.
  • Stivers, T., & Enfield, N. J. (2010). A coding scheme for question-response sequences in conversation. Journal of Pragmatics, 42, 2620-2626. doi:10.1016/j.pragma.2010.04.002.

    Abstract

    No abstract is available for this article.
  • Stivers, T. (2004). "No no no" and other types of multiple sayings in social interaction. Human Communication Research, 30(2), 260-293. doi:10.1111/j.1468-2958.2004.tb00733.x.

    Abstract

    Relying on the methodology of conversation analysis, this article examines a practice in ordinary conversation characterized by the resaying of a word, phrase, or sentence. The article shows that multiple sayings such as "No no no" or "Alright alright alright" are systematic in both their positioning relative to the interlocutor's talk and in their function. Specifically, the findings are that multiple sayings are a resource speakers have to display that their turn is addressing an in-progress course of action rather than only the just prior utterance. Speakers of multiple sayings communicate their stance that the prior speaker has persisted unnecessarily in the prior course of action and should properly halt that course of action.
  • Stivers, T., & Rossano, F. (2010). Mobilizing response. Research on Language and Social Interaction, 43, 3-31. doi:10.1080/08351810903471258.

    Abstract

    A fundamental puzzle in the organization of social interaction concerns how one individual elicits a response from another. This article asks what it is about some sequentially initial turns that reliably mobilizes a coparticipant to respond and under what circumstances individuals are accountable for producing a response. Whereas a linguistic approach suggests that this is what “questions” (more generally) and interrogativity (more narrowly) are for, a sociological approach to social interaction suggests that the social action a person is implementing mobilizes a recipient's response. We find that although both theories have merit, neither adequately solves the puzzle. We argue instead that different actions mobilize response to different degrees. Speakers then design their turns to perform actions, and with particular response-mobilizing features of turn-design speakers can hold recipients more accountable for responding or not. This model of response relevance allows sequential position, action, and turn design to each contribute to response relevance.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (Eds.). (2010). Question-response sequences in conversation across ten languages [Special Issue]. Journal of Pragmatics, 42(10). doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (2010). Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics, 42, 2615-2619. doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., & Hayashi, M. (2010). Transformative answers: One way to resist a question's constraints. Language in Society, 39, 1-25. doi:10.1017/S0047404509990637.

    Abstract

    A number of Conversation Analytic studies have documented that question recipients have a variety of ways to push against the constraints that questions impose on them. This article explores the concept of transformative answers – answers through which question recipients retroactively adjust the question posed to them. Two main sorts of adjustments are discussed: question term transformations and question agenda transformations. It is shown that the operations through which interactants implement term transformations are different from the operations through which they implement agenda transformations. Moreover, term-transforming answers resist only the question’s design, while agenda-transforming answers effectively resist both design and agenda, thus implying that agenda-transforming answers resist more strongly than design-transforming answers. The implications of these different sorts of transformations for alignment and affiliation are then explored.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2019). Bilingual preschoolers’ speech is associated with non-native maternal language input. Language Learning and Development, 15(1), 75-100. doi:10.1080/15475441.2018.1533473.

    Abstract

    Bilingual children are often exposed to non-native speech through their parents. Yet, little is known about the relation between bilingual preschoolers’ speech production and their speech input. The present study investigated the production of voice onset time (VOT) by Dutch-German bilingual preschoolers and their sequential bilingual mothers. The findings reveal an association between maternal VOT and bilingual children’s VOT in the heritage language German as well as in the majority language Dutch. By contrast, no input-production association was observed in the VOT production of monolingual German-speaking children and monolingual Dutch-speaking children. The results of this study provide the first empirical evidence that non-native and attrited maternal speech contributes to the often-observed linguistic differences between bilingual children and their monolingual peers.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2017). Second language attainment and first language attrition: The case of VOT in immersed Dutch–German late bilinguals. Second Language Research, 33(4), 483-518. doi:10.1177/0267658317704261.

    Abstract

    Speech of late bilinguals has frequently been described in terms of cross-linguistic influence (CLI) from the native language (L1) to the second language (L2), but CLI from the L2 to the L1 has received relatively little attention. This article addresses L2 attainment and L1 attrition in voicing systems through measures of voice onset time (VOT) in two groups of Dutch–German late bilinguals in the Netherlands. One group comprises native speakers of Dutch and the other group comprises native speakers of German, and the two groups further differ in their degree of L2 immersion. The L1-German–L2-Dutch bilinguals (N = 23) are exposed to their L2 at home and outside the home, and the L1-Dutch–L2-German bilinguals (N = 18) are only exposed to their L2 at home. We tested L2 attainment by comparing the bilinguals’ L2 to the other bilinguals’ L1, and L1 attrition by comparing the bilinguals’ L1 to Dutch monolinguals (N = 29) and German monolinguals (N = 27). Our findings indicate that complete L2 immersion may be advantageous in L2 acquisition, but at the same time it may cause L1 phonetic attrition. We discuss how the results match the predictions made by Flege’s Speech Learning Model and explore how far bilinguals’ success in acquiring L2 VOT and maintaining L1 VOT depends on the immersion context, articulatory constraints and the risk of sounding foreign accented.
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
  • Ye, Z., Stolk, A., Toni, I., & Hagoort, P. (2017). Oxytocin modulates semantic integration in speech comprehension. Journal of Cognitive Neuroscience, 29, 267-276. doi:10.1162/jocn_a_01044.

    Abstract

    Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with his or her knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
  • Sumer, B., Grabitz, C., & Küntay, A. (2017). Early produced signs are iconic: Evidence from Turkish Sign Language. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 3273-3278). Austin, TX: Cognitive Science Society.

    Abstract

    Motivated form-meaning mappings are pervasive in sign languages, and iconicity has recently been shown to facilitate sign learning from early on. This study investigated the role of iconicity for language acquisition in Turkish Sign Language (TID). Participants were 43 signing children (aged 10 to 45 months) of deaf parents. Sign production ability was recorded using the adapted version of MacArthur Bates Communicative Developmental Inventory (CDI) consisting of 500 items for TID. Iconicity and familiarity ratings for a subset of 104 signs were available. Our results revealed that the iconicity of a sign was positively correlated with the percentage of children producing a sign and that iconicity significantly predicted the percentage of children producing a sign, independent of familiarity or phonological complexity. Our results are consistent with previous findings on sign language acquisition and provide further support for the facilitating effect of iconic form-meaning mappings in sign learning.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, Tx: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • De Swart, P., & Van Bergen, G. (2019). How animacy and verbal information influence V2 sentence processing: Evidence from eye movements. Open Linguistics, 5(1), 630-649. doi:10.1515/opli-2019-0035.

    Abstract

    There exists a clear association between animacy and the grammatical function of transitive subject. The grammar of some languages requires the transitive subject to be high in animacy, or at least higher than the object. A similar animacy preference has been observed in processing studies in languages without such a categorical animacy effect. This animacy preference has been mainly established in structures in which either one or both arguments are provided before the verb. Our goal was to establish (i) whether this preference can already be observed before any argument is provided, and (ii) whether this preference is mediated by verbal information. To this end we exploited the V2 property of Dutch, which allows the verb to precede its arguments. Using a visual-world eye-tracking paradigm, we presented participants with V2 structures with either an auxiliary (e.g. Gisteren heeft X … ‘Yesterday, X has …’) or a lexical main verb (e.g. Gisteren motiveerde X … ‘Yesterday, X motivated …’) and we measured looks to the animate referent. The results indicate that the animacy preference can already be observed before arguments are presented and that the selectional restrictions of the verb mediate this bias, but do not override it completely.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Tachmazidou, I., Süveges, D., Min, J. L., Ritchie, G. R. S., Steinberg, J., Walter, K., Iotchkova, V., Schwartzentruber, J., Huang, J., Memari, Y., McCarthy, S., Crawford, A. A., Bombieri, C., Cocca, M., Farmaki, A.-E., Gaunt, T. R., Jousilahti, P., Kooijman, M. N., Lehne, B., Malerba, G., Männistö, S., Matchan, A., Medina-Gomez, C., Metrustry, S. J., Nag, A., Ntalla, I., Paternoster, L., Rayner, N. W., Sala, C., Scott, W. R., Shihab, H. A., Southam, L., St Pourcain, B., Traglia, M., Trajanoska, K., Zaza, G., Zhang, W., Artigas, M. S., Bansal, N., Benn, M., Chen, Z., Danecek, P., Lin, W.-Y., Locke, A., Luan, J., Manning, A. K., Mulas, A., Sidore, C., Tybjaerg-Hansen, A., Varbo, A., Zoledziewska, M., Finan, C., Hatzikotoulas, K., Hendricks, A. E., Kemp, J. P., Moayyeri, A., Panoutsopoulou, K., Szpak, M., Wilson, S. G., Boehnke, M., Cucca, F., Di Angelantonio, E., Langenberg, C., Lindgren, C., McCarthy, M. I., Morris, A. P., Nordestgaard, B. G., Scott, R. A., Tobin, M. D., Wareham, N. J., Burton, P., Chambers, J. C., Smith, G. D., Dedoussis, G., Felix, J. F., Franco, O. H., Gambaro, G., Gasparini, P., Hammond, C. J., Hofman, A., Jaddoe, V. W. V., Kleber, M., Kooner, J. S., Perola, M., Relton, C., Ring, S. M., Rivadeneira, F., Salomaa, V., Spector, T. D., Stegle, O., Toniolo, D., Uitterlinden, A. G., Barroso, I., Greenwood, C. M. T., Perry, J. R. B., Walker, B. R., Butterworth, A. S., Xue, Y., Durbin, R., Small, K. S., Soranzo, N., Timpson, N. J., & Zeggini, E. (2017). Whole-Genome Sequencing coupled to imputation discovers genetic signals for anthropometric traits. The American Journal of Human Genetics, 100(6), 865-884. doi:10.1016/j.ajhg.2017.04.014.

    Abstract

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture of 12 anthropometric traits associated with height, body mass, and fat distribution in up to 267,616 individuals. We report 106 genome-wide significant signals that have not been previously identified, including 9 low-frequency variants pointing to functional candidates. Of the 106 signals, 6 are in genomic regions that have not been implicated with related traits before, 28 are independent signals at previously reported regions, and 72 represent previously reported signals for a different anthropometric trait. 71% of signals reside within genes and fine mapping resolves 23 signals to one or two likely causal variants. We confirm genetic overlap between human monogenic and polygenic anthropometric traits and find signal enrichment in cis expression QTLs in relevant tissues. Our results highlight the potential of WGS strategies to enhance biologically relevant discoveries across the frequency spectrum.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation (“where” information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.

    Abstract

    To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we trained Dutch participants repeatedly over the course of a month of training to articulate a set of novel disyllabic input strings written in Greek script to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole word unit) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that reading recombined syllables of the trained pseudowords showed similar brain activation to trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2017). Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. Brain and Language, 167, 44-60. doi:10.1016/j.bandl.2016.05.009.

    Abstract

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
  • Takashima, A., Bakker-Marshall, I., Van Hell, J. G., McQueen, J. M., & Janzen, G. (2019). Neural correlates of word learning in children. Developmental Cognitive Neuroscience, 37: 100649. doi:10.1016/j.dcn.2019.100649.

    Abstract

    Memory representations of words are thought to undergo changes with consolidation: Episodic memories of novel words are transformed into lexical representations that interact with other words in the mental dictionary. Behavioral studies have shown that this lexical integration process is enhanced when there is more time for consolidation. Neuroimaging studies have further revealed that novel word representations are initially represented in a hippocampally-centered system, whereas left posterior middle temporal cortex activation increases with lexicalization. In this study, we measured behavioral and brain responses to newly-learned words in children. Two groups of Dutch children, aged between 8-10 and 14-16 years, were trained on 30 novel Japanese words depicting novel concepts. Children were tested on word-forms, word-meanings, and the novel words’ influence on existing word processing immediately after training, and again after a week. In line with the adult findings, hippocampal involvement decreased with time. Lexical integration, however, was not observed immediately or after a week, neither behaviorally nor neurally. It appears that time alone is not always sufficient for lexical integration to occur. We suggest that other factors (e.g., the novelty of the concepts and familiarity with the language the words are derived from) might also influence the integration process.

    Additional information

    Supplementary data
  • Takashima, A., & Verhoeven, L. (2019). Radical repetition effects in beginning learners of Chinese as a foreign language reading. Journal of Neurolinguistics, 50, 71-81. doi:10.1016/j.jneuroling.2018.03.001.

    Abstract

    The aim of the present study was to examine whether repetition of radicals during training of Chinese characters leads to better word acquisition performance in beginning learners of Chinese as a foreign language. Thirty Dutch university students were trained on 36 Chinese one-character words for their pronunciations and meanings. They were also exposed to the specifics of the radicals, that is, for phonetic radicals, the associated pronunciation was explained, and for semantic radicals the associated categorical meanings were explained. Results showed that repeated exposure to phonetic and semantic radicals through character pronunciation and meaning training indeed induced better understanding of those radicals that were shared among different characters. Furthermore, characters in the training set that shared phonetic radicals were pronounced better than those that did not. Repetition of semantic radicals across different characters, however, hindered the learning of exact meanings. Students generally confused the meanings of other characters that shared the semantic radical. The study shows that in the initial stage of learning, overlapping information from the shared radicals is effectively learned. Acquisition of the specifics of individual characters, however, requires more training.

    Additional information

    Supplementary data
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization. NeuroImage, 84, 265-278. doi:10.1016/j.neuroimage.2013.08.023.

    Abstract

    The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words’ phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words’ phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24 hours’ delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding.
  • Takaso, H., Eisner, F., Wise, R. J. S., & Scott, S. K. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: A Positron Emission Tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236. doi:10.1044/1092-4388(2009/09-0009).

    Abstract

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many non-stuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission tomography (PET) was used to image regional cerebral blood flow changes, an index of neural activity, and to assess the influence of increasing amounts of delay. Results: Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). Conclusions: This study permitted distinctions to be made between the neural response to hearing one's voice at a delay, and the neural activity that correlates with this delay. Notably, all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensori-motor ‘how’ system in the production of speech under conditions of delayed auditory feedback.
  • Tamaoka, K., Makioka, S., Sanders, S., & Verdonschot, R. G. (2017). www.kanjidatabase.com: A new interactive online database for psychological and linguistic research on Japanese kanji and their compound words. Psychological Research, 81(3), 696-708. doi:10.1007/s00426-016-0764-3.

    Abstract

    Most experimental research making use of the Japanese language has involved the 1945 officially standardized kanji (Japanese logographic characters) in the Joyo kanji list (originally announced by the Japanese government in 1981). However, this list was extensively modified in 2010: five kanji were removed and 196 kanji were added; the latest revision of the list now has a total of 2136 kanji. Using an up-to-date corpus consisting of 11 years' worth of articles printed in the Mainichi Newspaper (2000-2010), we have constructed two novel databases that can be used in psychological research using the Japanese language: (1) a database containing a wide variety of properties on the latest 2136 Joyo kanji, and (2) a novel database containing 27,950 two-kanji compound words (or jukugo). Based on these two databases, we have created an interactive website (www.kanjidatabase.com) to retrieve and store linguistic information to be used in psychological and linguistic experiments. The present paper reports the most important characteristics for the new databases, as well as their value for experimental psychological and linguistic research.
  • Tamaoka, K., Saito, N., Kiyama, S., Timmer, K., & Verdonschot, R. G. (2014). Is pitch accent necessary for comprehension by native Japanese speakers? - An ERP investigation. Journal of Neurolinguistics, 27(1), 31-40. doi:10.1016/j.jneuroling.2013.08.001.

    Abstract

    Not unlike the tonal system in Chinese, Japanese habitually attaches pitch accents to the production of words. However, in contrast to Chinese, few homophonic word-pairs are really distinguished by pitch accents (Shibata & Shibata, 1990). This predicts that pitch accent plays a small role in lexical selection for Japanese language comprehension. The present study investigated whether native Japanese speakers necessarily use pitch accent in the processing of accent-contrasted homophonic pairs (e.g., ame [LH] for 'candy' and ame [HL] for 'rain'), measuring electroencephalographic (EEG) potentials. Electrophysiological evidence (i.e., N400) was obtained when a word was semantically incorrect for a given context but not for incorrectly accented homophones. This suggests that pitch accent indeed plays a minor role when understanding Japanese.
  • Tan, Y., Martin, R. C., & Van Dyke, J. A. (2017). Semantic and syntactic interference in sentence comprehension: A comparison of working memory models. Frontiers in Psychology, 8: 198. doi:10.3389/fpsyg.2017.00198.

    Abstract

    This study investigated the nature of the underlying working memory system supporting sentence processing through examining individual differences in sensitivity to retrieval interference effects during sentence comprehension. Interference effects occur when readers incorrectly retrieve sentence constituents which are similar to those required during integrative processes. We examined interference arising from a partial match between distracting constituents and syntactic and semantic cues, and related these interference effects to performance on working memory, short-term memory (STM), vocabulary, and executive function tasks. For online sentence comprehension, as measured by self-paced reading, the magnitude of individuals' syntactic interference effects was predicted by general WM capacity and the relation remained significant when partialling out vocabulary, indicating that the effects were not due to verbal knowledge. For offline sentence comprehension, as measured by responses to comprehension questions, both general WM capacity and vocabulary knowledge interacted with semantic interference for comprehension accuracy, suggesting that both general WM capacity and the quality of semantic representations played a role in determining how well interference was resolved offline. For comprehension question reaction times, a measure of semantic STM capacity interacted with semantic but not syntactic interference. However, a measure of phonological capacity (digit span) and a general measure of resistance to response interference (Stroop effect) did not predict individuals' interference resolution abilities in either online or offline sentence comprehension. The results are discussed in relation to the multiple capacities account of working memory (e.g., Martin and Romani, 1994; Martin and He, 2004), and the cue-based retrieval parsing approach (e.g., Lewis et al., 2006; Van Dyke et al., 2014). While neither approach was fully supported, a possible means of reconciling the two approaches and directions for future research are proposed.
  • Tanner, J. E., & Perlman, M. (2017). Moving beyond ‘meaning’: Gorillas combine gestures into sequences for creative display. Language & Communication, 54, 56-72. doi:10.1016/j.langcom.2016.10.006.

    Abstract

    The great apes produce gestures intentionally and flexibly, and sometimes they combine their gestures into sequences, producing two or more gestures in close succession. We reevaluate previous findings related to ape gesture sequences and present qualitative analysis of videotaped gorilla interaction. We present evidence that gorillas produce at least two different kinds of gesture sequences: some sequences are largely composed of gestures that depict motion in an iconic manner, typically requesting particular action by the partner; others are multimodal and contain gestures – often percussive in nature – that are performed in situations of play or display. Display sequences seem to primarily exhibit the performer’s emotional state and physical fitness but have no immediate functional goal. Analysis reveals that some gorilla play and display sequences can be 1) organized hierarchically into longer bouts and repetitions; 2) innovative and individualized, incorporating objects and environmental features; and 3) highly interactive between partners. It is illuminating to look beyond ‘meaning’ in the conventional linguistic sense and look at the possibility that characteristics of music and dance, as well as those of language, are included in the gesturing of apes.
  • Tanner, D., Nicol, J., & Brehm, L. (2014). The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76, 195-215. doi:10.1016/j.jml.2014.07.003.

    Abstract

    Attraction interference in language comprehension and production may result from common or different processes. In the present paper, we investigate attraction interference during language comprehension, focusing on the contexts in which interference arises and the time-course of these effects. Using evidence from event-related brain potentials (ERPs) and sentence judgment times, we show that agreement attraction in comprehension is best explained as morphosyntactic interference during memory retrieval. This stands in contrast to attraction as a message-level process involving the representation of the subject NP's number features, which is a strong contributor to attraction in production. We thus argue that the cognitive antecedents of agreement attraction in comprehension are not identical to those of attraction in production, and moreover, that attraction in comprehension is primarily a consequence of similarity-based interference in cue-based memory retrieval processes. We suggest that mechanisms responsible for attraction during language comprehension are a subset of those involved in language production.
  • Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.

    Abstract

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.
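
    As a rough illustration of the dependent measure described above, the sketch below computes an N2pc-style amplitude as the contralateral-minus-ipsilateral voltage difference at a lateral posterior electrode pair, averaged over a 200-300 ms window. It is a minimal sketch, not the authors' analysis pipeline; the electrode pair, time window, array layout, and all variable names are assumptions made for illustration.

        # Minimal sketch (not the authors' pipeline): an N2pc-style amplitude computed as the
        # contralateral-minus-ipsilateral mean voltage at a lateral posterior electrode pair.
        # Assumed inputs: epochs (n_trials, n_channels, n_samples), per-trial target side,
        # channel indices for the left/right electrodes, and a vector of sample times in seconds.
        import numpy as np

        def n2pc_amplitude(epochs, target_side, left_ch, right_ch, times, window=(0.2, 0.3)):
            sel = (times >= window[0]) & (times <= window[1])
            win = epochs[:, :, sel]                                   # restrict to the N2pc window
            left_target = (target_side == 'left')
            contra = np.where(left_target, win[:, right_ch].mean(-1), win[:, left_ch].mean(-1))
            ipsi = np.where(left_target, win[:, left_ch].mean(-1), win[:, right_ch].mean(-1))
            return float((contra - ipsi).mean())                      # more negative = larger N2pc

        # Example with synthetic data (120 trials, 64 channels, 300 samples):
        rng = np.random.default_rng(0)
        epochs = rng.normal(scale=1e-6, size=(120, 64, 300))
        times = np.linspace(-0.1, 0.5, 300)
        sides = rng.choice(np.array(['left', 'right']), size=120)
        print(n2pc_amplitude(epochs, sides, left_ch=25, right_ch=62, times=times))
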
  • Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.

    Abstract

    When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see [Belke et al., 2008] and [Moores et al., 2003]). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Turn-taking in social talk dialogues: Temporal, formal and functional aspects. In 9th International Conference Speech and Computer (SPECOM'2004) (pp. 454-461).

    Abstract

    This paper presents a quantitative analysis of the turn-taking mechanism evidenced in 93 telephone dialogues that were taken from the 9-million-word Spoken Dutch Corpus. While the first part of the paper focuses on the temporal phenomena of turn taking, such as durations of pauses and overlaps of turns in the dialogues, the second part explores the discourse-functional aspects of utterances in a subset of 8 dialogues that were annotated especially for this purpose. The results show that speakers adapt their turn-taking behaviour to the interlocutor’s behaviour. Furthermore, the results indicate that male-male dialogues show a higher proportion of overlapping turns than female-female dialogues.
  • Ten Bosch, L., Mulder, K., & Boves, L. (2019). Phase synchronization between EEG signals as a function of differences between stimuli characteristics. In Proceedings of Interspeech 2019 (pp. 1213-1217). doi:10.21437/Interspeech.2019-2443.

    Abstract

    The neural processing of speech leads to specific patterns in the brain which can be measured as, e.g., EEG signals. When properly aligned with the speech input and averaged over many tokens, the Event Related Potential (ERP) signal is able to differentiate specific contrasts between speech signals. Well-known effects relate to the difference between expected and unexpected words, in particular in the N400, while effects in N100 and P200 are related to attention and acoustic onset effects. Most EEG studies deal with the amplitude of EEG signals over time, sidestepping the effect of phase and phase synchronization. This paper investigates the relation between phase in the EEG signals measured in an auditory lexical decision task by Dutch participants listening to full and reduced English word forms. We show that phase synchronization takes place across stimulus conditions, and that the so-called circular variance is narrowly related to the type of contrast between stimuli.
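
    As an illustration of the phase measure mentioned above, the sketch below derives single-trial phase from the analytic signal of band-passed EEG and computes the circular variance across trials (one minus the length of the mean resultant vector, the complement of inter-trial phase coherence). It is a hedged sketch under assumed inputs, not the analysis code used in the paper.

        # Illustrative sketch (not the paper's implementation): circular variance of EEG phase
        # across trials. 0 means perfectly aligned phases; 1 means uniformly scattered phases.
        # Assumed input: trials, an array (n_trials, n_samples) of band-passed EEG for one channel.
        import numpy as np
        from scipy.signal import hilbert

        def circular_variance(trials):
            phase = np.angle(hilbert(trials, axis=-1))            # instantaneous phase per trial
            resultant = np.abs(np.exp(1j * phase).mean(axis=0))   # mean resultant length per sample
            return 1.0 - resultant                                # circular variance over time

        # Example: 50 simulated trials of a noisy 4 Hz component sampled at 200 Hz for 1 s.
        t = np.arange(0, 1, 1 / 200)
        rng = np.random.default_rng(0)
        trials = np.sin(2 * np.pi * 4 * t) + rng.normal(scale=0.5, size=(50, t.size))
        print(circular_variance(trials).mean())                   # low value = strong phase locking
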
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2014). Comparing reaction time sequences from human participants and computational models. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 462-466).

    Abstract

    This paper addresses the question of how to compare reaction times computed by a computational model of speech comprehension with the reaction times observed from participants. The question is based on the observation that reaction time sequences differ substantially across participants, which raises the issue of how exactly the model is to be assessed. Part of the variation in reaction time sequences is caused by the so-called local speed: the current reaction time correlates to some extent with a number of previous reaction times, due to slow variations in attention, fatigue, etc. This paper proposes a method, based on time series analysis, to filter the observed reaction times in order to separate out the local speed effects. Results show that after such filtering, both the between-participant correlations and the average correlation between participant and model increase. The presented technique provides insights into relevant aspects that are to be taken into account when comparing reaction time sequences.
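
    The filtering step described above can be illustrated with a simple sketch: regress each log reaction time on the mean of the preceding few log reaction times and keep the residuals, then compare between-participant correlations before and after. The window size, the linear form of the filter, and the simulated data are assumptions; the paper's own time-series filter may differ.

        # Hedged sketch of local-speed filtering, not the paper's exact method: each log RT is
        # regressed on the mean of the preceding k log RTs and the residual is kept.
        import numpy as np

        def filter_local_speed(rts, k=5):
            x = np.log(np.asarray(rts, dtype=float))
            baseline = np.array([x[max(0, i - k):i].mean() if i > 0 else x[0]
                                 for i in range(len(x))])
            slope, intercept = np.polyfit(baseline, x, 1)         # x ~ local baseline
            return x - (intercept + slope * baseline)             # locally detrended log RTs

        def mean_pairwise_corr(sequences):
            """Average correlation between participants' RT sequences over the same item order."""
            m = np.corrcoef(np.vstack(sequences))
            return m[np.triu_indices_from(m, k=1)].mean()

        # Example: three simulated participants sharing item difficulty but with individual drift.
        rng = np.random.default_rng(2)
        item = rng.normal(scale=0.08, size=200)                   # shared item difficulty
        raw = [np.exp(6.7 + 0.02 * np.cumsum(rng.normal(size=200)) + item
                      + rng.normal(scale=0.08, size=200)) for _ in range(3)]
        print(mean_pairwise_corr(raw),
              mean_pairwise_corr([filter_local_speed(r) for r in raw]))
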
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Durational aspects of turn-taking in spontaneous face-to-face and telephone dialogues. In P. Sojka, I. Kopecek, & K. Pala (Eds.), Text, Speech and Dialogue: Proceedings of the 7th International Conference TSD 2004 (pp. 563-570). Heidelberg: Springer.

    Abstract

    On the basis of two-speaker spontaneous conversations, it is shown that the distributions of both pauses and speech-overlaps of telephone and face-to-face dialogues have different statistical properties. Pauses in a face-to-face dialogue last up to 4 times longer than pauses in telephone conversations in functionally comparable conditions. There is a high correlation (0.88 or larger) between the average pause duration for the two speakers across face-to-face dialogues and telephone dialogues. The data provided form a first quantitative analysis of the complex turn-taking mechanism evidenced in the dialogues available in the 9-million-word Spoken Dutch Corpus.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., Mehta, A. D., Megevand, P., Groppe, D. M., & Zion-Golumbic, E. (2017). Low-frequency cortical oscillations entrain to subthreshold rhythmic auditory stimuli. The Journal of Neuroscience, 37(19), 4903-4912. doi:10.1523/JNEUROSCI.3658-16.2017.

    Abstract

    Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this "inaudible" rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness.
  • Ten Oever, S., & Sack, A. T. (2019). Interactions between rhythmic and feature predictions to create parallel time-content associations. Frontiers in Neuroscience, 13: 791. doi:10.3389/fnins.2019.00791.

    Abstract

    The brain is inherently proactive, constantly predicting the when (moment) and what (content) of future input in order to optimize information processing. Previous research on such predictions has mainly studied the “when” or “what” domain separately, without investigating the potential integration of both types of predictive information. In the absence of such integration, temporal cues are assumed to enhance any upcoming content at the predicted moment in time (general temporal predictor). However, if the when and what prediction domains were integrated, a much more flexible neural mechanism may be proposed in which temporal-feature interactions would allow for the creation of multiple concurrent time-content predictions (parallel time-content predictor). Here, we used a temporal association paradigm in two experiments in which sound identity was systematically paired with a specific time delay after the offset of a rhythmic visual input stream. In Experiment 1, we revealed that participants associated the time delay of presentation with the identity of the sound. In Experiment 2, we unexpectedly found that the strength of this temporal association was negatively related to the EEG steady-state evoked responses (SSVEP) in preceding trials, showing that after high neuronal responses participants responded inconsistently with the time-content associations, similar to adaptation mechanisms. In this experiment, time-content associations were only present for low SSVEP responses in previous trials. These results tentatively show that it is possible to represent multiple time-content paired predictions in parallel; however, future research is needed to investigate this interaction further.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., & Zion-Golumbic, E. (2014). Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia, 63, 43-50. doi:10.1016/j.neuropsychologia.2014.08.008.

    Abstract

    Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
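
    The bias-independent detection effect mentioned above can be illustrated with the standard signal-detection quantity d', which separates sensitivity from response bias. The sketch below is a generic worked example with invented counts, not the authors' threshold-estimation procedure.

        # Generic signal detection sketch (not the authors' analysis): d' separates detection
        # sensitivity from response bias. The counts below are invented for illustration.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # log-linear correction keeps rates away from 0 and 1 (avoids infinite z-scores)
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        print(d_prime(hits=40, misses=10, false_alarms=8, correct_rejections=42))   # ≈ 1.8
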
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2017). The recognition of compounds: A computational account. In Proceedings of Interspeech 2017 (pp. 1158-1162). doi:10.21437/Interspeech.2017-1048.

    Abstract

    This paper investigates the processes in comprehending spoken noun-noun compounds, using data from the BALDEY database. BALDEY contains lexicality judgments and reaction times (RTs) for Dutch stimuli for which linguistic information is also included. Two different approaches are combined. The first is based on regression by Dynamic Survival Analysis, which models decisions and RTs as a consequence of the fact that a cumulative density function exceeds some threshold. The parameters of that function are estimated from the observed RT data. The second approach is based on DIANA, a process-oriented computational model of human word comprehension, which simulates the comprehension process with the acoustic stimulus as input. DIANA gives the identity and the number of the word candidates that are activated at each 10 ms time step.

    Both approaches show how the processes involved in comprehending compounds change during a stimulus. Survival Analysis shows that the impact of word duration varies during the course of a stimulus. The density of word and non-word hypotheses in DIANA shows a corresponding pattern with different regimes. We show how the approaches complement each other, and discuss additional ways in which data and process models can be combined.
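
    The threshold idea in the survival-analysis approach above can be made concrete with a small sketch: if a response is produced once a cumulative distribution function exceeds a threshold, the predicted RT is simply the inverse CDF evaluated at that threshold. The lognormal form and the parameter values are assumptions for illustration, not the distribution fitted in the paper.

        # Hedged sketch of the CDF-threshold idea: a response is triggered when a cumulative
        # distribution function F(t) first exceeds a threshold theta, so the predicted RT is the
        # inverse CDF at theta. The lognormal form and parameters are illustrative assumptions.
        from scipy.stats import lognorm

        def predicted_rt(theta, sigma, scale):
            """RT (in seconds) at which a lognormal CDF reaches threshold theta (0 < theta < 1)."""
            return lognorm.ppf(theta, s=sigma, scale=scale)

        print(predicted_rt(theta=0.8, sigma=0.4, scale=0.9))   # ≈ 1.26 s
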
  • Ter Bekke, M., Ozyurek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QB: Cognitive Science Society.

    Abstract

    In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered and if this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e. unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers’ memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language and memory.
  • Terrill, A. (2010). [Review of Bowern, Claire. 2008. Linguistic fieldwork: a practical guide]. Language, 86(2), 435-438. doi:10.1353/lan.0.0214.
  • Terrill, A. (2010). [Review of R. A. Blust The Austronesian languages. 2009. Canberra: Pacific Linguistics]. Oceanic Linguistics, 49(1), 313-316. doi:10.1353/ol.0.0061.

    Abstract

    In lieu of an abstract, here is a preview of the article. This is a marvelous, dense, scholarly, detailed, exhaustive, and ambitious book. In 800-odd pages, it seeks to describe the whole huge majesty of the Austronesian language family, as well as the history of the family, the history of ideas relating to the family, and all the ramifications of such topics. Blust doesn't just describe, he goes into exhaustive detail, and not just over a few topics, but over every topic he covers. This is an incredible achievement, representing a lifetime of experience. This is not a book to be read from cover to cover—it is a book to be dipped into, pondered, and considered, slowly and carefully. The book is not organized by area or subfamily; readers interested in one area or family can consult the authoritative work on Western Austronesian (Adelaar and Himmelmann 2005), or, for the Oceanic languages, Lynch, Ross, and Crowley (2002). Rather, Blust's stated aim "is to provide a comprehensive overview of Austronesian languages which integrates areal interests into a broader perspective" (xxiii). Thus the aim is more ambitious than just discussion of areal features or historical connections, but seeks to describe the interconnections between these. The Austronesian language family is very large, second only in size to Niger-Congo (xxii). It encompasses over 1,000 members, and its protolanguage has been dated back to 6,000 years ago (xxii). The exact groupings of some Austronesian languages are still under discussion, but broadly, the family is divided into ten major subgroups, nine of which are spoken in Taiwan, the homeland of the Austronesian family. The tenth, Malayo-Polynesian, is itself divided into two major groups: Western Malayo-Polynesian, which is spread throughout the Philippines, Indonesia, and mainland Southeast Asia to Madagascar; and Central-Eastern Malayo-Polynesian, spoken from eastern Indonesia throughout the Pacific. The geographic, cultural, and linguistic diversity of the family
  • Terwisscha van Scheltinga, A. F., Bakker, S. C., Van Haren, N. E., Boos, H. B., Schnack, H. G., Cahn, W., Hoogman, M., Zwiers, M. P., Fernandez, G., Franke, B., Hulshoff Pol, H. E., & Kahn, R. S. (2014). Association study of fibroblast growth factor genes and brain volumes in schizophrenic patients and healthy controls. Psychiatric Genetics, 24, 283-284. doi:10.1097/YPG.0000000000000057.
  • Theakston, A., Coates, A., & Holler, J. (2014). Handling agents and patients: Representational cospeech gestures help children comprehend complex syntactic constructions. Developmental Psychology, 50(7), 1973-1984. doi:10.1037/a0036694.

    Abstract

    Gesture is an important precursor of children’s early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children’s comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old (M = 4 years 6 months) children witnessed 2-participant actions described using the infrequent object-cleft-construction (OCC; It was the dog that the cat chased). Half saw an experimenter accompanying her descriptions with gestures representing the 2 participants and indicating the direction of action; the remaining children did not witness gesture. Children who witnessed gestures showed better comprehension of the OCC than those who did not witness gestures, both in and beyond the immediate physical context, but this benefit was restricted to the oldest 4-year-olds. In Study 2, a further group of older 4-year-old children (M = 4 years 7 months) witnessed the same 2-participant actions described by an experimenter and accompanied by gestures, but the gesture represented only the 2 participants and not the direction of the action. Again, a benefit of gesture was observed on subsequent comprehension of the OCC. We interpret these findings as demonstrating that representational cospeech gestures can help children comprehend complex linguistic structures by highlighting the roles played by the participants in the event.

  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Thiebaut de Schotten, M., Friedrich, P., & Forkel, S. J. (2019). One size fits all does not apply to brain lateralisation. Physics of Life Reviews, 30, 30-33. doi:10.1016/j.plrev.2019.07.007.

    Abstract

    Our understanding of the functioning of the brain is primarily based on an average model of the brain's functional organisation, and any deviation from the standard is considered as random noise or a pathological appearance. Studying pathologies has, however, greatly contributed to our understanding of brain functions. For instance, the study of naturally-occurring or surgically-induced brain lesions revealed that language is predominantly lateralised to the left hemisphere while perception/action and emotion are commonly lateralised to the right hemisphere. The lateralisation of function was subsequently replicated by task-related functional neuroimaging in the healthy population. Despite its high significance and reproducibility, this pattern of lateralisation of function is true for most, but not all participants. Bilateral and flipped representations of classically lateralised functions have been reported during development and in the healthy adult population for language, perception/action and emotion. Understanding these different functional representations at an individual level is crucial to improve the sophistication of our models and account for the variance in developmental trajectories, cognitive performance differences and clinical recovery. With the availability of in vivo neuroimaging, it has become feasible to study large numbers of participants and reliably characterise individual differences, also referred to as phenotypes. Yet, we are at the beginning of inter-individual variability modelling, and new theories of brain function will have to account for these differences across participants.
  • Thompson, P. M., Andreassen, O. A., Arias-Vasquez, A., Bearden, C. E., Boedhoe, P. S., Brouwer, R. M., Buckner, R. L., Buitelaar, J. K., Bulaeva, K. B., Cannon, D. M., Cohen, R. A., Conrod, P. J., Dale, A. M., Deary, I. J., Dennis, E. L., De Reus, M. A., Desrivieres, S., Dima, D., Donohoe, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Ganjgahi, H., Garavan, H., Glahn, D. C., Grabe, H. J., Guadalupe, T., Gutman, B. A., Hashimoto, R., Hibar, D. P., Holland, D., Hoogman, M., Pol, H. E. H., Hosten, N., Jahanshad, N., Kelly, S., Kochunov, P., Kremen, W. S., Lee, P. H., Mackey, S., Martin, N. G., Mazoyer, B., McDonald, C., Medland, S. E., Morey, R. A., Nichols, T. E., Paus, T., Pausova, Z., Schmaal, L., Schumann, G., Shen, L., Sisodiya, S. M., Smit, D. J., Smoller, J. W., Stein, D. J., Stein, J. L., Toro, R., Turner, J. A., Van den Heuvel, M., Van den Heuvel, O. A., Van Erp, T. G., Van Rooij, D., Veltman, D. J., Walter, H., Wang, Y., Wardlaw, J. M., Whelan, C. D., Wright, M. J., & Ye, J. (2017). ENIGMA and the Individual: Predicting Factors that Affect the Brain in 35 Countries Worldwide. NeuroImage, 145, 389-408. doi:10.1016/j.neuroimage.2015.11.057.
  • Thompson, J. R., Minelli, C., Bowden, J., Del Greco, F. M., Gill, D., Jones, E. M., Shapland, C. Y., & Sheehan, N. A. (2017). Mendelian randomization incorporating uncertainty about pleiotropy. Statistics in Medicine, 36(29), 4627-4645. doi:10.1002/sim.7442.

    Abstract

    Mendelian randomization (MR) requires strong assumptions about the genetic instruments, of which the most difficult to justify relate to pleiotropy. In a two-sample MR, different methods of analysis are available if we are able to assume M1: no pleiotropy (fixed effects meta-analysis); M2: that there may be pleiotropy but that the average pleiotropic effect is zero (random effects meta-analysis); or M3: that the average pleiotropic effect is nonzero (MR-Egger). In the latter 2 cases, we also require that the size of the pleiotropy is independent of the size of the effect on the exposure. Selecting one of these models without good reason would run the risk of misrepresenting the evidence for causality. The most conservative strategy would be to use M3 in all analyses as this makes the weakest assumptions, but such an analysis gives much less precise estimates and so should be avoided whenever stronger assumptions are credible. We consider the situation of a two-sample design when we are unsure which of these 3 pleiotropy models is appropriate. The analysis is placed within a Bayesian framework and Bayesian model averaging is used. We demonstrate that even large samples of the scale used in genome-wide meta-analysis may be insufficient to distinguish the pleiotropy models based on the data alone. Our simulations show that Bayesian model averaging provides a reasonable trade-off between bias and precision. Bayesian model averaging is recommended whenever there is uncertainty about the nature of the pleiotropy.
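
    To make the model choice above concrete, the sketch below computes the two extreme estimators being averaged over: an inverse-variance-weighted (no-pleiotropy, M1-style) slope and an MR-Egger (M3-style) slope with an intercept, combined with crude BIC-based weights as a stand-in for the paper's full Bayesian model averaging; the random-effects model M2 is omitted. The per-variant summary statistics and the weighting scheme are invented for illustration.

        # Hedged sketch: IVW (M1-style) and MR-Egger (M3-style) causal estimates from per-variant
        # summary statistics, averaged with rough BIC-based weights. This illustrates the modelling
        # choice, not the Bayesian machinery used in the paper; all data below are made up.
        import numpy as np

        def weighted_fit(bx, by, sy, intercept):
            w = np.sqrt(1.0 / sy ** 2)
            X = np.column_stack([np.ones_like(bx), bx]) if intercept else bx[:, None]
            coef, rss, *_ = np.linalg.lstsq(X * w[:, None], by * w, rcond=None)
            n, k = len(bx), X.shape[1]
            bic = n * np.log(rss[0] / n) + k * np.log(n)          # crude weighted-OLS BIC
            return coef[-1], bic                                  # causal slope, model fit

        bx = np.array([0.12, 0.08, 0.15, 0.05, 0.10])             # variant effects on exposure
        by = np.array([0.030, 0.018, 0.041, 0.008, 0.027])        # variant effects on outcome
        sy = np.array([0.010, 0.012, 0.011, 0.009, 0.010])        # outcome standard errors

        slope_ivw, bic_ivw = weighted_fit(bx, by, sy, intercept=False)     # no pleiotropy (M1)
        slope_egger, bic_egger = weighted_fit(bx, by, sy, intercept=True)  # nonzero average (M3)
        w_ivw = 1.0 / (1.0 + np.exp(-(bic_egger - bic_ivw) / 2.0))         # approximate model weight
        print(w_ivw * slope_ivw + (1.0 - w_ivw) * slope_egger)             # model-averaged slope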

  • Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., Toro, R., Jahanshad, N., Schumann, G., Franke, B., Wright, M. J., Martin, N. G., Agartz, I., Alda, M., Alhusaini, S., Almasy, L., Almeida, J., Alpert, K., Andreasen, N. C., Andreassen, O. A., and 269 more (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging and Behavior, 8(2), 153-182. doi:10.1007/s11682-013-9269-5.

    Abstract

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA’s first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
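
    The pooling step behind such consortium meta-analyses can be illustrated with a minimal fixed-effect, inverse-variance sketch. The per-site numbers below are invented, and the code is an illustration of the general technique, not the ENIGMA pipeline.

        # Minimal sketch of fixed-effect, inverse-variance meta-analysis across sites; the
        # per-site effect estimates and standard errors below are invented for illustration.
        import numpy as np

        def inverse_variance_meta(effects, ses):
            w = 1.0 / np.asarray(ses, dtype=float) ** 2
            pooled = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
            pooled_se = np.sqrt(1.0 / np.sum(w))
            return pooled, pooled_se

        # Three hypothetical sites reporting the same association (effect, standard error):
        print(inverse_variance_meta(effects=[12.1, 9.4, 15.0], ses=[4.0, 3.5, 5.2]))
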
  • Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2014). Infants’ expectations about gestures and actions in third-party interactions. Frontiers in Psychology, 5: 321. doi:10.3389/fpsyg.2014.00321.

    Abstract

    We investigated 14-month-old infants’ expectations toward a third party addressee of communicative gestures and an instrumental action. Infants’ eye movements were tracked as they observed a person (the Gesturer) point, direct a palm-up request gesture, or reach toward an object, and another person (the Addressee) respond by grasping it. Infants’ looking patterns indicate that when the Gesturer pointed or used the palm-up request, infants anticipated that the Addressee would give the object to the Gesturer, suggesting that they ascribed a motive of request to the gestures. In contrast, when the Gesturer reached for the object, and in a control condition where no action took place, the infants did not anticipate the Addressee’s response. The results demonstrate that infants’ recognition of communicative gestures extends to others’ interactions, and that infants can anticipate how third-party addressees will respond to others’ gestures.
  • Tilot, A. K., Gaugler, M. K., Yu, Q., Romigh, T., Yu, W., Miller, R. H., Frazier, T. W., & Eng, C. (2014). Germline disruption of Pten localization causes enhanced sex-dependent social motivation and increased glial production. Human Molecular Genetics, 23(12), 3212-3227. doi:10.1093/hmg/ddu031.

    Abstract

    PTEN Hamartoma Tumor Syndrome (PHTS) is an autosomal-dominant genetic condition underlying a subset of autism spectrum disorder (ASD) with macrocephaly. Caused by germline mutations in PTEN, PHTS also causes increased risks of multiple cancers via dysregulation of the PI3K and MAPK signaling pathways. Conditional knockout models have shown that neural Pten regulates social behavior, proliferation and cell size. Although much is known about how the intracellular localization of PTEN regulates signaling in cancer cell lines, we know little of how PTEN localization influences normal brain physiology and behavior. To address this, we generated a germline knock-in mouse model of cytoplasm-predominant Pten and characterized its behavioral and cellular phenotypes. The homozygous Ptenm3m4 mice have decreased total Pten levels including a specific drop in nuclear Pten and exhibit region-specific increases in brain weight. The Ptenm3m4 model displays sex-specific increases in social motivation, poor balance and normal recognition memory—a profile reminiscent of some individuals with high functioning ASD. The cytoplasm-predominant protein caused cellular hypertrophy limited to the soma and led to increased NG2 cell proliferation and accumulation of glia. The animals also exhibit significant astrogliosis and microglial activation, indicating a neuroinflammatory phenotype. At the signaling level, Ptenm3m4 mice show brain region-specific differences in Akt activation. These results demonstrate that differing alterations to the same autism-linked gene can cause distinct behavioral profiles. The Ptenm3m4 model is the first murine model of inappropriately elevated social motivation in the context of normal cognition and may expand the range of autism-related behaviors replicated in animal models.
  • Tilot, A. K., Vino, A., Kucera, K. S., Carmichael, D. A., Van den Heuvel, L., Den Hoed, J., Sidoroff-Dorso, A. V., Campbell, A., Porteous, D. J., St Pourcain, B., Van Leeuwen, T. M., Ward, J., Rouw, R., Simner, J., & Fisher, S. E. (2019). Investigating genetic links between grapheme-colour synaesthesia and neuropsychiatric traits. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190026. doi:10.1098/rstb.2019.0026.

    Abstract

    Synaesthesia is a neurological phenomenon affecting perception, where triggering stimuli (e.g. letters and numbers) elicit unusual secondary sensory experiences (e.g. colours). Family-based studies point to a role for genetic factors in the development of this trait. However, the contributions of common genomic variation to synaesthesia have not yet been investigated. Here, we present the SynGenes cohort, the largest genotyped collection of unrelated people with grapheme–colour synaesthesia (n = 723). Synaesthesia has been associated with a range of other neuropsychological traits, including enhanced memory and mental imagery, as well as greater sensory sensitivity. Motivated by the prior literature on putative trait overlaps, we investigated polygenic scores derived from published genome-wide scans of schizophrenia and autism spectrum disorder (ASD), comparing our SynGenes cohort to 2181 non-synaesthetic controls. We found a very slight association between schizophrenia polygenic scores and synaesthesia (Nagelkerke's R2 = 0.0047, empirical p = 0.0027) and no significant association for scores related to ASD (Nagelkerke's R2 = 0.00092, empirical p = 0.54) or body mass index (R2 = 0.00058, empirical p = 0.60), included as a negative control. As sample sizes for studying common genomic variation continue to increase, genetic investigations of the kind reported here may yield novel insights into the shared biology between synaesthesia and other traits, to complement findings from neuropsychology and brain imaging.
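
    The polygenic score association reported above (a Nagelkerke R2 from case-control data) can be illustrated with a short sketch that compares logistic regressions with and without the score. The simulated data, the covariates, and the incremental-R2 formulation are assumptions for illustration, not the SynGenes analysis.

        # Hedged sketch (not the study's pipeline): incremental Nagelkerke R^2 of a polygenic
        # score over covariates, from two logistic regressions. The data below are simulated.
        import numpy as np
        import statsmodels.api as sm

        def nagelkerke_r2(y, covariates, score):
            n = len(y)
            base = sm.Logit(y, sm.add_constant(covariates)).fit(disp=0)
            full = sm.Logit(y, sm.add_constant(np.column_stack([covariates, score]))).fit(disp=0)
            cox_snell = 1.0 - np.exp(2.0 / n * (base.llf - full.llf))
            max_r2 = 1.0 - np.exp(2.0 / n * base.llf)
            return cox_snell / max_r2

        rng = np.random.default_rng(1)
        covariates = rng.normal(size=(500, 2))                 # e.g. simulated age and sex
        score = rng.normal(size=500)                           # simulated polygenic score
        y = (rng.random(500) < 1.0 / (1.0 + np.exp(-0.3 * score))).astype(int)
        print(nagelkerke_r2(y, covariates, score))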
