Publications

  • Shao, Z., Roelofs, A., Martin, R., & Meyer, A. S. (2015). Selective inhibition and naming performance in semantic blocking, picture-word interference, and color-word Stroop tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41, 1806-1820. doi:10.1037/a0039363.

    Abstract

    In two studies, we examined whether explicit distractors are necessary and sufficient to evoke selective inhibition in three naming tasks: the semantic blocking, picture-word interference, and color-word Stroop tasks. Delta plots were used to quantify the size of the interference effects as a function of reaction time (RT). Selective inhibition was operationalized as the decrease in the size of the interference effect as a function of naming RT. For all naming tasks, mean naming RTs were significantly longer in the interference condition than in a control condition. The slopes of the interference effects for the longest naming RTs correlated with the magnitude of the mean interference effect in both the semantic blocking task and the picture-word interference task, suggesting that selective inhibition was involved in reducing interference from strong semantic competitors, whether evoked by a single explicit competitor or by strong implicit competitors in picture naming. However, there was no correlation between the slopes and the mean interference effect in the Stroop task, suggesting that selective inhibition plays a smaller role in this task despite the presence of explicit distractors. Whereas the results of the semantic blocking task suggest that an explicit distractor is not necessary for triggering inhibition, the results of the Stroop task suggest that such a distractor is not sufficient for evoking it either.
  • Shao, Z., Roelofs, A., & Meyer, A. S. (2014). Predicting naming latencies for action pictures: Dutch norms. Behavior Research Methods, 46, 274-283. doi:10.3758/s13428-013-0358-6.

    Abstract

    The present study provides Dutch norms for age of acquisition, familiarity, imageability, image agreement, visual complexity, word frequency, and word length (in syllables) for 124 line drawings of actions. Ratings were obtained from 117 Dutch participants. Word frequency was determined on the basis of the SUBTLEX-NL corpus (Keuleers, Brysbaert, & New, Behavior Research Methods, 42, 643–650, 2010). For 104 of the pictures, naming latencies and name agreement were determined in a separate naming experiment with 74 native speakers of Dutch. The Dutch norms closely corresponded to the norms for British English. Multiple regression analysis showed that age of acquisition, imageability, image agreement, visual complexity, and name agreement were significant predictors of naming latencies, whereas word frequency and word length were not. Combined with the results of a principal-component analysis, these findings suggest that variables influencing the processes of conceptual preparation and lexical selection affect latencies more strongly than do variables influencing word-form encoding.

    Additional information

    Shao_Behav_Res_2013_Suppl_Mat.doc
  • Shao, Z., Janse, E., Visser, K., & Meyer, A. S. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5: 772. doi:10.3389/fpsyg.2014.00772.

    Abstract

    This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n=82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.
  • Shayan, S., Ozturk, O., Bowerman, M., & Majid, A. (2014). Spatial metaphor in language can promote the development of cross-modal mappings in children. Developmental Science, 17(4), 636-643. doi:10.1111/desc.12157.

    Abstract

    Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a cross-modal association task. All groups, except for German children, performed significantly better than chance. German-speaking adults’ success suggests that the pitch-to-thickness association can be learned through experience. But the fact that German children were at chance indicates that this learning takes time. Intriguingly, Farsi and Turkish children's performance suggests that learning cross-modal associations can be boosted through experience with consistent metaphorical mappings in the input language.
  • Shitova, N. (2018). Electrophysiology of competition and adjustment in word and phrase production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Shkaravska, O., Van Eekelen, M., & Tamalet, A. (2014). Collected size semantics for strict functional programs over general polymorphic lists. In U. Dal Lago, & R. Pena (Eds.), Foundational and Practical Aspects of Resource Analysis: Third International Workshop, FOPARA 2013, Bertinoro, Italy, August 29-31, 2013, Revised Selected Papers (pp. 143-159). Berlin: Springer.

    Abstract

    Size analysis can be an important part of heap consumption analysis. This paper is part of ongoing work on typing support for checking output-on-input size dependencies for function definitions in a strict functional language. A significant restriction of our earlier results is that inner data structures (e.g. in a list of lists) all must have the same size. Here, we make a big step forward by overcoming this limitation through the introduction of higher-order size annotations, so that varying sizes of inner data structures can be expressed. In this way the analysis becomes applicable to general, polymorphic nested lists.
  • Shkaravska, O., & Van Eekelen, M. (2014). Univariate polynomial solutions of algebraic difference equations. Journal of Symbolic Computation, 60, 15-28. doi:10.1016/j.jsc.2013.10.010.

    Abstract

    Contrary to linear difference equations, there is no general theory of difference equations of the form G(P(x−τ1),…,P(x−τs))+G0(x)=0, with τi∈K, G(x1,…,xs)∈K[x1,…,xs] of total degree D⩾2 and G0(x)∈K[x], where K is a field of characteristic zero. This article is concerned with the following problem: given τi, G and G0, find an upper bound on the degree d of a polynomial solution P(x), if it exists. In the presented approach the problem is reduced to constructing a univariate polynomial for which d is a root. The authors formulate a sufficient condition under which such a polynomial exists. Using this condition, they give an effective bound on d, for instance, for all difference equations of the form G(P(x−a),P(x−a−1),P(x−a−2))+G0(x)=0 with quadratic G, and all difference equations of the form G(P(x),P(x−τ))+G0(x)=0 with G having an arbitrary degree.
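    As a toy illustration of the class of equations considered (our own example, not one taken from the paper), take s = 2, shifts τ1 = 0 and τ2 = 1, and the quadratic G below:

        \[
        G(y_1, y_2) = y_1^2 - y_2^2, \qquad G_0(x) = 1 - 2x, \qquad
        G\bigl(P(x), P(x-1)\bigr) + G_0(x) = 0.
        \]

    Here P(x) = x is a polynomial solution, since x^2 - (x - 1)^2 + (1 - 2x) = 0 identically; the problem addressed in the paper is to bound the degree d of such a solution (here d = 1) from τ_i, G, and G_0 alone, before trying to construct it.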
  • Sicoli, M. A., Stivers, T., Enfield, N. J., & Levinson, S. C. (2015). Marked initial pitch in questions signals marked communicative function. Language and Speech, 58(2), 204-223. doi:10.1177/0023830914529247.

    Abstract

    In conversation, the initial pitch of an utterance can provide an early phonetic cue to the communicative function, the speech act, or the social action being implemented. We conducted quantitative acoustic measurements and statistical analyses of pitch in over 10,000 utterances, including 2512 questions, their responses, and about 5000 other utterances, produced by 180 speakers from a corpus of 70 natural conversations in 10 languages. We measured pitch at the first prominence in a speaker’s utterance and discriminated utterances by language, speaker, gender, question form, and the social action achieved by the speaker’s turn. Applying multivariate logistic regression, we found that initial pitch that significantly deviated from the speaker’s median pitch level was predictive of the social action of the question. In questions designed to solicit agreement with an evaluation rather than information, pitch predictably diverged from the speaker’s median into the top 10% of the speaker’s range. This latter finding reveals a kind of iconicity in the relationship between prosody and social action, in which a marked pitch correlates with a marked social action. Thus, we argue that speakers rely on pitch to provide an early signal for recipients that the question is not to be interpreted through its literal semantics but rather through an inference.
  • Sidnell, J., & Enfield, N. J. (2014). Deixis and the interactional foundations of reference. In Y. Huang (Ed.), The Oxford handbook of pragmatics.
  • Sidnell, J., Kockelman, P., & Enfield, N. J. (2014). Community and social life. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 481-483). Cambridge: Cambridge University Press.
  • Sidnell, J., Enfield, N. J., & Kockelman, P. (2014). Interaction and intersubjectivity. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 343-345). Cambridge: Cambridge University Press.
  • Sidnell, J., & Enfield, N. J. (2014). The ontology of action, in interaction. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423-446). Cambridge: Cambridge University Press.
  • Sikora, K. (2018). Executive control in language production by adults and children with and without language impairment. PhD Thesis, Radboud University, Nijmegen, The Netherlands.

    Abstract

    The present study examined how the updating, inhibiting, and shifting abilities underlying executive control influence spoken noun-phrase production. Previous studies provided evidence that updating and inhibiting, but not shifting, influence picture naming response time (RT). However, little is known about the role of executive control in more complex forms of language production like generating phrases. We assessed noun-phrase production using picture description and a picture-word interference procedure. We measured picture description RT to assess length, distractor, and switch effects, which were assumed to reflect, respectively, the updating, inhibiting, and shifting abilities of adult participants. Moreover, for each participant we obtained scores on executive control tasks that measured verbal and nonverbal updating, nonverbal inhibiting, and nonverbal shifting. We found that both verbal and nonverbal updating scores correlated with the overall mean picture description RTs. Furthermore, the length effect in the RTs correlated with verbal but not nonverbal updating scores, while the distractor effect correlated with inhibiting scores. We did not find a correlation between the switch effect in the mean RTs and the shifting scores. However, the shifting scores correlated with the switch effect in the normal part of the underlying RT distribution. These results suggest that updating, inhibiting, and shifting each influence the speed of phrase production, thereby demonstrating a contribution of all three executive control abilities to language production.

    Additional information

    full text via Radboud Repository
  • Sikora, K., & Roelofs, A. (2018). Switching between spoken language-production tasks: the role of attentional inhibition and enhancement. Language, Cognition and Neuroscience, 33(7), 912-922. doi:10.1080/23273798.2018.1433864.

    Abstract

    Since Pillsbury [1908. Attention. London: Swan Sonnenschein & Co], the issue of whether attention operates through inhibition or enhancement has been on the scientific agenda. We examined whether overcoming previous attentional inhibition or enhancement is the source of asymmetrical switch costs in spoken noun-phrase production and colour-word Stroop tasks. In Experiment 1, using bivalent stimuli, we found asymmetrical costs in response times for switching between long and short phrases and between Stroop colour naming and reading. However, in Experiment 2, using bivalent stimuli for the weaker tasks (long phrases, colour naming) and univalent stimuli for the stronger tasks (short phrases, word reading), we obtained an asymmetrical switch cost for phrase production, but a symmetrical cost for Stroop. The switch cost evidence was quantified using Bayesian statistical analyses. Our findings suggest that switching between phrase types involves inhibition, whereas switching between colour naming and reading involves enhancement. Thus, the attentional mechanism depends on the language-production task involved. The results challenge theories of task switching that assume only one attentional mechanism, inhibition or enhancement, rather than both mechanisms.
  • Silva, S., Folia, V., Inácio, F., Castro, S. L., & Petersson, K. M. (2018). Modality effects in implicit artificial grammar learning: An EEG study. Brain Research, 1687, 50-59. doi:10.1016/j.brainres.2018.02.020.

    Abstract

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
  • Silva, S., Branco, P., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). Musical phrase boundaries, wrap-up and the closure positive shift. Brain Research, 1585, 99-107. doi:10.1016/j.brainres.2014.08.025.

    Abstract

    We investigated global integration (wrap-up) processes at the boundaries of musical phrases by comparing the effects of well and non-well formed phrases on event-related potentials time-locked to two boundary points: the onset and the offset of the boundary pause. The Closure Positive Shift, which is elicited at the boundary offset, was not modulated by the quality of phrase structure (well vs. non-well formed). In contrast, the boundary onset potentials showed different patterns for well and non-well formed phrases. Our results contribute to specify the functional meaning of the Closure Positive Shift in music, shed light on the large-scale structural integration of musical input, and raise new hypotheses concerning shared resources between music and language.
  • Silva, S., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. Journal of Integrative Neuroscience, 13(1), 19-34. doi:10.1142/S0219635214500022.

    Abstract

    Neuroscientific and musicological approaches to music cognition indicate that listeners familiarized in the Western tonal tradition expect a musical phrase boundary at predictable time intervals. However, phrase boundary prediction processes in music remain untested. We analyzed event-related potentials (ERPs) and event-related induced power changes at the onset and offset of a boundary pause. We made comparisons with modified melodies, where the pause was omitted and filled by tones. The offset of the pause elicited a closure positive shift (CPS), indexing phrase boundary detection. The onset of the filling tones elicited significant increases in theta and beta powers. In addition, the P2 component was larger when the filling tones started than when they ended. The responses to boundary omission suggest that listeners expected to hear a boundary pause. Therefore, boundary prediction seems to coexist with boundary detection in music segmentation.
  • Simanova, I. (2014). In search of conceptual representations in the brain: Towards mind-reading. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
  • Simanova, I., Van Gerven, M. A., Oostenveld, R., & Hagoort, P. (2015). Predicting the semantic category of internally generated words from neuromagnetic recordings. Journal of Cognitive Neuroscience, 27(1), 35-45. doi:10.1162/jocn_a_00690.

    Abstract

    In this study, we explore the possibility of predicting the semantic category of words from brain signals in a free word generation task. Participants produced single words from different semantic categories in a modified semantic fluency task. A Bayesian logistic regression classifier was trained to predict the semantic category of words from single-trial MEG data. Significant classification accuracies were achieved using sensor-level MEG time series at the time interval of conceptual preparation. Semantic category prediction was also possible using source-reconstructed time series, based on minimum norm estimates of cortical activity. Brain regions that contributed most to classification on the source level were identified. These were the left inferior frontal gyrus, left middle frontal gyrus, and left posterior middle temporal gyrus. Additionally, the temporal dynamics of brain activity underlying the semantic preparation during word generation was explored. These results provide important insights into central aspects of language production.
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, but they also apply less strict criteria.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simpson, N. H., Ceroni, F., Reader, R. H., Covill, L. E., Knight, J. C., the SLI Consortium, Hennessy, E. R., Bolton, P. F., Conti-Ramsden, G., O’Hare, A., Baird, G., Fisher, S. E., & Newbury, D. F. (2015). Genome-wide analysis identifies a role for common copy number variants in specific language impairment. European Journal of Human Genetics, 23, 1370-1377. doi:10.1038/ejhg.2014.296.

    Abstract

    An exploratory genome-wide copy number variant (CNV) study was performed in 127 independent cases with specific language impairment (SLI), their first-degree relatives (385 individuals) and 269 population controls. Language-impaired cases showed an increased CNV burden in terms of the average number of events (11.28 vs 10.01, empirical P=0.003), the total length of CNVs (717 vs 513 Kb, empirical P=0.0001), the average CNV size (63.75 vs 51.6 Kb, empirical P=0.0005) and the number of genes spanned (14.29 vs 10.34, empirical P=0.0007) when compared with population controls, suggesting that CNVs may contribute to SLI risk. A similar trend was observed in first-degree relatives regardless of affection status. The increased burden found in our study was not driven by large or de novo events, which have been described as causative in other neurodevelopmental disorders. Nevertheless, de novo CNVs might be important on a case-by-case basis, as indicated by identification of events affecting relevant genes, such as ACTR2 and CSNK1A1, and small events within known micro-deletion/-duplication syndrome regions, such as chr8p23.1. Pathway analysis of the genes present within the CNVs of the independent cases identified significant overrepresentation of acetylcholine binding, cyclic-nucleotide phosphodiesterase activity and MHC proteins as compared with controls. Taken together, our data suggest that the majority of the risk conferred by CNVs in SLI is via common, inherited events within a ‘common disorder–common variant’ model. Therefore the risk conferred by CNVs will depend upon the combination of events inherited (both CNVs and SNPs), the genetic background of the individual and the environmental factors.

    Additional information

    ejhg2014296x1.pdf ejhg2014296x2.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%), suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Sjerps, M. J., & Reinisch, E. (2015). Divide and conquer: How perceptual contrast sensitivity and perceptual learning cooperate in reducing input variation in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 710-722. doi:10.1037/a0039028.

    Abstract

    Listeners have to overcome variability of the speech signal that can arise, for example, because of differences in room acoustics, differences in speakers’ vocal tract properties, or idiosyncrasies in pronunciation. Two mechanisms that are involved in resolving such variation are perceptually contrastive effects that arise from surrounding acoustic context and lexically guided perceptual learning. Although both processes have been studied in great detail, little attention has been paid to how they operate relative to each other in speech perception. The present study set out to address this issue. The carrier parts of exposure stimuli of a classical perceptual learning experiment were spectrally filtered such that the acoustically ambiguous final fricatives sounded relatively more like the lexically intended sound (Experiment 1) or the alternative (Experiment 2). Perceptual learning was found only in the latter case. The findings show that perceptual contrast effects precede lexically guided perceptual learning, at least in terms of temporal order, and potentially in terms of cognitive processing levels as well
  • Sjerps, M. J., Zhang, C., & Peng, G. (2018). Lexical tone is perceived relative to locally surrounding context, vowel quality to preceding context. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 914-924. doi:10.1037/xhp0000504.

    Abstract

    Important speech cues such as lexical tone and vowel quality are perceptually contrasted to the distribution of those same cues in surrounding contexts. However, it is unclear whether preceding and following contexts have similar influences, and to what extent those influences are modulated by the auditory history of previous trials. To investigate this, Cantonese participants labeled sounds from (a) a tone continuum (mid- to high-level), presented with a context that had raised or lowered F0 values and (b) a vowel quality continuum (/u/ to /o/), where the context had raised or lowered F1 values. Contexts with high or low F0/F1 were presented in separate blocks or intermixed in 1 block. Contexts were presented following (Experiment 1) or preceding the target continuum (Experiment 2). Contrastive effects were found for both tone and vowel quality (e.g., decreased F0 values in contexts lead to more high tone target judgments and vice versa). Importantly, however, lexical tone was only influenced by F0 in immediately preceding and following contexts. Vowel quality was only influenced by the F1 in preceding contexts, but this extended to contexts from preceding trials. Contextual influences on tone and vowel quality are qualitatively different, which has important implications for understanding the mechanism of context effects in speech perception.
  • Sjerps, M. J., & Meyer, A. S. (2015). Variation in dual-task performance reveals late initiation of speech planning in turn-taking. Cognition, 136, 304-324. doi:10.1016/j.cognition.2014.10.008.

    Abstract

    The smooth transitions between turns in natural conversation suggest that speakers often begin to plan their utterances while listening to their interlocutor. The presented study investigates whether this is indeed the case and, if so, when utterance planning begins. Two hypotheses were contrasted: that speakers begin to plan their turn as soon as possible (in our experiments less than a second after the onset of the interlocutor’s turn), or that they do so close to the end of the interlocutor’s turn. Turn-taking was combined with a finger tapping task to measure variations in cognitive load. We assumed that the onset of speech planning in addition to listening would be accompanied by deterioration in tapping performance. Two picture description experiments were conducted. In both experiments there were three conditions: (1) Tapping and Speaking, where participants tapped a complex pattern while taking over turns from a pre-recorded speaker, (2) Tapping and Listening, where participants carried out the tapping task while overhearing two pre-recorded speakers, and (3) Speaking Only, where participants took over turns as in the Tapping and Speaking condition but without tapping. The experiments differed in the amount of tapping training the participants received at the beginning of the session. In Experiment 2, the participants’ eye-movements were recorded in addition to their speech and tapping. Analyses of the participants’ tapping performance and eye movements showed that they initiated the cognitively demanding aspects of speech planning only shortly before the end of the turn of the preceding speaker. We argue that this is a smart planning strategy, which may be the speakers’ default in many everyday situations.
  • Sleegers, K., Bettens, K., De Roeck, A., Van Cauwenberghe, C., Cuyvers, E., Verheijen, J., Struyfs, H., Van Dongen, J., Vermeulen, S., Engelborghs, S., Vandenbulcke, M., Vandenberghe, R., De Deyn, P., Van Broeckhoven, C., & BELNEU consortium (2015). A 22-single nucleotide polymorphism Alzheimer's disease risk score correlates with family history, onset age, and cerebrospinal fluid Aβ42. Alzheimer's & Dementia, 11(12), 1452-1460. doi:10.1016/j.jalz.2015.02.013.

    Abstract

    Introduction: The ability to identify individuals at increased genetic risk for Alzheimer's disease (AD) may streamline biomarker and drug trials and aid clinical and personal decision making. Methods: We evaluated the discriminative ability of a genetic risk score (GRS) covering 22 published genetic risk loci for AD in 1162 Flanders-Belgian AD patients and 1019 controls and assessed correlations with family history, onset age, and cerebrospinal fluid (CSF) biomarkers (Aβ1-42, T-Tau, P-Tau181P). Results: A GRS including all single nucleotide polymorphisms (SNPs) and age-specific APOE ε4 weights reached area under the curve (AUC) 0.70, which increased to AUC 0.78 for patients with familial predisposition. Risk of AD increased with GRS (odds ratio 2.32, 95% confidence interval 2.08-2.58 per unit; P < 1.0e-15). Onset age and CSF Aβ1-42 decreased with increasing GRS (P = 9.0e-11 for onset age; P = 8.9e-7 for Aβ1-42). Discussion: The discriminative ability of this 22-SNP GRS is still limited, but these data illustrate that incorporation of age-specific weights improves discriminative ability. GRS-phenotype correlations highlight the feasibility of identifying individuals at highest susceptibility.
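    For readers unfamiliar with such scores, the sketch below shows the generic additive form a SNP-based genetic risk score typically takes (a weighted sum of risk-allele counts plus an APOE ε4 term). It is our own illustration with made-up names, weights, and genotypes; it does not reproduce the paper's 22-SNP weighting scheme or its age-specific APOE weights.

        import numpy as np

        def genetic_risk_score(allele_counts, snp_weights, apoe_e4_count=0, apoe_weight=0.0):
            """Generic additive GRS: per-SNP weights (e.g. log odds ratios) times risk-allele
            counts (0/1/2), plus an APOE e4 term. In an age-specific model the APOE weight
            would depend on the individual's age band (those weights are not reproduced here)."""
            return float(np.dot(allele_counts, snp_weights) + apoe_weight * apoe_e4_count)

        # Illustrative call: 21 non-APOE SNPs with hypothetical weights and genotypes.
        rng = np.random.default_rng(1)
        counts = rng.integers(0, 3, size=21)    # risk-allele counts per SNP
        weights = np.full(21, 0.10)             # hypothetical per-allele log odds ratios
        print(genetic_risk_score(counts, weights, apoe_e4_count=1, apoe_weight=1.2))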
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within-group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Sloetjes, H. (2014). ELAN: Multimedia annotation application. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 305-320). Oxford: Oxford University Press.
  • Slone, L. K., Abney, D. H., Borjon, J. I., Chen, C.-h., Franchak, J. M., Pearcy, D., Suarez-Rivera, C., Xu, T. L., Zhang, Y., Smith, L. B., & Yu, C. (2018). Gaze in action: Head-mounted eye tracking of children's dynamic visual attention during naturalistic behavior. Journal of Visualized Experiments, (141): e58496. doi:10.3791/58496.

    Abstract

    Young children's visual environments are dynamic, changing moment-by-moment as children physically and visually explore spaces and objects and interact with people around them. Head-mounted eye tracking offers a unique opportunity to capture children's dynamic egocentric views and how they allocate visual attention within those views. This protocol provides guiding principles and practical recommendations for researchers using head-mounted eye trackers in both laboratory and more naturalistic settings. Head-mounted eye tracking complements other experimental methods by enhancing opportunities for data collection in more ecologically valid contexts through increased portability and freedom of head and body movements compared to screen-based eye tracking. This protocol can also be integrated with other technologies, such as motion tracking and heart-rate monitoring, to provide a higher-density multimodal dataset for examining natural behavior, learning, and development than was previously possible. This paper illustrates the types of data generated from head-mounted eye tracking in a study designed to investigate visual attention in one natural context for toddlers: free-flowing toy play with a parent. Successful use of this protocol will allow researchers to collect data that can be used to answer questions not only about visual attention, but also about a broad range of other perceptual, cognitive, and social skills and their development.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: markers of communicative relevance of gesture during demonstration to adults and children. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 217-222). Nantes: University of Nantes.

    Abstract

    Speakers adapt their speech and gestures in various ways for their audience. We investigated further whether they use ostensive signals (eye gaze, ostensive speech (e.g. like this, this), or a combination of both) in relation to their gestures when talking to different addressees, i.e., to another adult or a child, in a multimodal demonstration task. While adults used more eye gaze towards their gestures with other adults than with children, they were more likely to use combined ostensive signals for children than for adults. Thus speakers mark the communicative relevance of their gestures with different types of ostensive signals, taking the type of addressee into account.
  • De Smedt, K., Hinrichs, E., Meurers, D., Skadiņa, I., Sanford Pedersen, B., Navarretta, C., Bel, N., Lindén, K., Lopatková, M., Hajič, J., Andersen, G., & Lenkiewicz, P. (2014). CLARA: A new generation of researchers in common language resources and their applications. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 2166-2174).
  • De Smedt, F., Merchie, E., Barendse, M. T., Rosseel, Y., De Naeghel, J., & Van Keer, H. (2018). Cognitive and motivational challenges in writing: Studying the relation with writing performance across students' gender and achievement level. Reading Research Quarterly, 53(2), 249-272. doi:10.1002/rrq.193.

    Abstract

    In the past, several assessment reports on writing repeatedly showed that elementary school students do not develop the essential writing skills to be successful in school. In this respect, prior research has pointed to the fact that cognitive and motivational challenges are at the root of the rather basic level of elementary students' writing performance. Additionally, previous research has revealed gender and achievement-level differences in elementary students' writing. In view of providing effective writing instruction for all students to overcome writing difficulties, the present study provides more in-depth insight into (a) how cognitive and motivational challenges mediate and correlate with students' writing performance and (b) whether and how these relations vary for boys and girls and for writers of different achievement levels. In the present study, 1,577 fifth- and sixth-grade students completed questionnaires regarding their writing self-efficacy, writing motivation, and writing strategies. In addition, half of the students completed two writing tests, respectively focusing on the informational or narrative text genre. Based on multiple group structural equation modeling (MG-SEM), we put forward two models: a MG-SEM model for boys and girls and a MG-SEM model for low, average, and high achievers. The results underline the importance of studying writing models for different groups of students in order to gain more refined insight into the complex interplay between motivational and cognitive challenges related to students' writing performance.
  • Smeets, C. J. L. M., Jezierska, J., Watanabe, H., Duarri, A., Fokkens, M. R., Meijer, M., Zhou, Q., Yakovleva, T., Boddeke, E., den Dunnen, W., van Deursen, J., Bakalkin, G., Kampinga, H. H., van de Sluis, B., & Verbeek, D. S. (2015). Elevated mutant dynorphin A causes Purkinje cell loss and motor dysfunction in spinocerebellar ataxia type 23. Brain, 138(9), 2537-2552. doi:10.1093/brain/awv195.

    Abstract

    Spinocerebellar ataxia type 23 is caused by mutations in PDYN, which encodes the opioid neuropeptide precursor protein, prodynorphin. Prodynorphin is processed into the opioid peptides, α-neoendorphin, and dynorphins A and B, that normally exhibit opioid-receptor mediated actions in pain signalling and addiction. Dynorphin A is likely a mutational hotspot for spinocerebellar ataxia type 23 mutations, and in vitro data suggested that dynorphin A mutations lead to persistently elevated mutant peptide levels that are cytotoxic and may thus play a crucial role in the pathogenesis of spinocerebellar ataxia type 23. To further test this and study spinocerebellar ataxia type 23 in more detail, we generated a mouse carrying the spinocerebellar ataxia type 23 mutation R212W in PDYN. Analysis of peptide levels using a radioimmunoassay shows that these PDYNR212W mice display markedly elevated levels of mutant dynorphin A, which are associated with climbing fibre retraction and Purkinje cell loss, visualized with immunohistochemical staining. The PDYNR212W mice reproduced many of the clinical features of spinocerebellar ataxia type 23, with gait deficits starting at 3 months of age revealed by footprint pattern analysis, and progressive loss of motor coordination and balance at the age of 12 months demonstrated by declining performances on the accelerating Rotarod. The pathologically elevated mutant dynorphin A levels in the cerebellum coincided with transcriptionally dysregulated ionotropic and metabotropic glutamate receptors and glutamate transporters, and altered neuronal excitability. In conclusion, the PDYNR212W mouse is the first animal model of spinocerebellar ataxia type 23 and our work indicates that the elevated mutant dynorphin A peptide levels are likely responsible for the initiation and progression of the disease, affecting glutamatergic signalling, neuronal excitability, and motor performance. Our novel mouse model defines a critical role for opioid neuropeptides in spinocerebellar ataxia, and suggests that restoring the elevated mutant neuropeptide levels can be explored as a therapeutic intervention.
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data have established that literacy leads to changes in the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Smorenburg, L., Rodd, J., & Chen, A. (2015). The effect of explicit training on the prosodic production of L2 sarcasm by Dutch learners of English. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.

    Abstract

    Previous research [9] suggests that Dutch learners of (British) English are not able to express sarcasm prosodically in their L2. The present study investigates whether explicit training on the prosodic markers of sarcasm in English can improve learners’ realisation of sarcasm. Sarcastic speech was elicited in short simulated telephone conversations between Dutch advanced learners of English and a native British English-speaking ‘friend’ in two sessions, fourteen days apart. Between the two sessions, participants were trained by means of (1) a presentation, (2) directed independent practice, and (3) evaluation of participants’ production and individual feedback in small groups. L1 British English-speaking raters subsequently evaluated how sarcastic the participants’ responses sounded on a five-point scale. Significantly higher sarcasm ratings were given to the L2 learners’ productions obtained after the training than to those obtained before the training; explicit training on prosody thus has a positive effect on learners’ production of sarcasm.
  • Smulders, F. T. Y., Ten Oever, S., Donkers, F. C. L., Quaedflieg, C. W. E. M., & Van de Ven, V. (2018). Single-trial log transformation is optimal in frequency analysis of resting EEG alpha. European Journal of Neuroscience, 48(7), 2585-2598. doi:10.1111/ejn.13854.

    Abstract

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized an undisputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single-epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time on the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude.
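    To make the recommended order of operations concrete, here is a minimal Python sketch (our own illustration, not the authors' pipeline; the data, sampling rate, epoch length, and band limits are hypothetical): alpha power is computed per single epoch, log-transformed immediately, and only then averaged and contrasted between eyes-closed and eyes-open conditions.

        import numpy as np
        from scipy.signal import welch
        from scipy.stats import ttest_ind

        def single_epoch_log_alpha(epochs, fs, band=(8.0, 12.0)):
            """Return log-transformed alpha power per epoch (epochs: n_epochs x n_samples)."""
            f, pxx = welch(epochs, fs=fs, nperseg=epochs.shape[1], axis=-1)
            alpha = pxx[:, (f >= band[0]) & (f <= band[1])].mean(axis=-1)  # band power per epoch
            return np.log(alpha)  # log on single epochs, before any averaging

        # Hypothetical stand-in for one participant's epoched resting EEG (2-s epochs at 250 Hz);
        # real recordings would be loaded here instead of random noise.
        fs = 250
        rng = np.random.default_rng(0)
        eyes_closed = rng.standard_normal((60, 2 * fs))
        eyes_open = rng.standard_normal((60, 2 * fs))

        log_closed = single_epoch_log_alpha(eyes_closed, fs)
        log_open = single_epoch_log_alpha(eyes_open, fs)
        res = ttest_ind(log_closed, log_open)  # within-participant Berger-effect contrast
        print(f"Berger effect: {log_closed.mean() - log_open.mean():.3f} log units, t = {res.statistic:.2f}")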
  • Snijders Blok, L., Rousseau, J., Twist, J., Ehresmann, S., Takaku, M., Venselaar, H., Rodan, L. H., Nowak, C. B., Douglas, J., Swoboda, K. J., Steeves, M. A., Sahai, I., Stumpel, C. T. R. M., Stegmann, A. P. A., Wheeler, P., Willing, M., Fiala, E., Kochhar, A., Gibson, W. T., Cohen, A. S. A., Agbahovbe, R., Innes, A. M., Au, P. Y. B., Rankin, J., Anderson, I. J., Skinner, S. A., Louie, R. J., Warren, H. E., Afenjar, A., Keren, B., Nava, C., Buratti, J., Isapof, A., Rodriguez, D., Lewandowski, R., Propst, J., Van Essen, T., Choi, M., Lee, S., Chae, J. H., Price, S., Schnur, R. E., Douglas, G., Wentzensen, I. M., Zweier, C., Reis, A., Bialer, M. G., Moore, C., Koopmans, M., Brilstra, E. H., Monroe, G. R., Van Gassen, K. L. I., Van Binsbergen, E., Newbury-Ecob, R., Bownass, L., Bader, I., Mayr, J. A., Wortmann, S. B., Jakielski, K. J., Strand, E. A., Kloth, K., Bierhals, T., The DDD study, Roberts, J. D., Petrovich, R. M., Machida, S., Kurumizaka, H., Lelieveld, S., Pfundt, R., Jansen, S., Derizioti, P., Faivre, L., Thevenon, J., Assoum, M., Shriberg, L., Kleefstra, T., Brunner, H. G., Wade, P. A., Fisher, S. E., & Campeau, P. M. (2018). CHD3 helicase domain mutations cause a neurodevelopmental syndrome with macrocephaly and impaired speech and language. Nature Communications, 9: 4619. doi:10.1038/s41467-018-06014-6.

    Abstract

    Chromatin remodeling is of crucial importance during brain development. Pathogenic alterations of several chromatin remodeling ATPases have been implicated in neurodevelopmental disorders. We describe an index case with a de novo missense mutation in CHD3, identified during whole genome sequencing of a cohort of children with rare speech disorders. To gain a comprehensive view of features associated with disruption of this gene, we use a genotype-driven approach, collecting and characterizing 35 individuals with de novo CHD3 mutations and overlapping phenotypes. Most mutations cluster within the ATPase/helicase domain of the encoded protein. Modeling their impact on the three-dimensional structure demonstrates disturbance of critical binding and interaction motifs. Experimental assays with six of the identified mutations show that a subset directly affects ATPase activity, and all but one yield alterations in chromatin remodeling. We implicate de novo CHD3 mutations in a syndrome characterized by intellectual disability, macrocephaly, and impaired speech and language.
  • Snijders Blok, L., Hiatt, S. M., Bowling, K. M., Prokop, J. W., Engel, K. L., Cochran, J. N., Bebin, E. M., Bijlsma, E. K., Ruivenkamp, C. A. L., Terhal, P., Simon, M. E. H., Smith, R., Hurst, J. A., The DDD study, MCLaughlin, H., Person, R., Crunk, A., Wangler, M. F., Streff, H., Symonds, J. D., Zuberi, S. M., Elliott, K. S., Sanders, V. R., Masunga, A., Hopkin, R. J., Dubbs, H. A., Ortiz-Gonzalez, X. R., Pfundt, R., Brunner, H. G., Fisher, S. E., Kleefstra, T., & Cooper, G. M. (2018). De novo mutations in MED13, a component of the Mediator complex, are associated with a novel neurodevelopmental disorder. Human Genetics, 137(5), 375-388. doi:10.1007/s00439-018-1887-y.

    Abstract

    Many genetic causes of developmental delay and/or intellectual disability (DD/ID) are extremely rare, and robust discovery of these requires both large-scale DNA sequencing and data sharing. Here we describe a GeneMatcher collaboration which led to a cohort of 13 affected individuals harboring protein-altering variants, 11 of which are de novo, in MED13; the only inherited variant was transmitted to an affected child from an affected mother. All patients had intellectual disability and/or developmental delays, including speech delays or disorders. Other features that were reported in two or more patients include autism spectrum disorder, attention deficit hyperactivity disorder, optic nerve abnormalities, Duane anomaly, hypotonia, mild congenital heart abnormalities, and dysmorphisms. Six affected individuals had mutations that are predicted to truncate the MED13 protein, six had missense mutations, and one had an in-frame deletion of one amino acid. Out of the seven non-truncating mutations, six clustered in two specific locations of the MED13 protein: an N-terminal and C-terminal region. The four N-terminal clustering mutations affect two adjacent amino acids that are known to be involved in MED13 ubiquitination and degradation, p.Thr326 and p.Pro327. MED13 is a component of the CDK8-kinase module that can reversibly bind Mediator, a multi-protein complex that is required for Polymerase II transcription initiation. Mutations in several other genes encoding subunits of Mediator have been previously shown to associate with DD/ID, including MED13L, a paralog of MED13. Thus, our findings add MED13 to the group of CDK8-kinase module-associated disease genes.
  • Sonnweber, R., Ravignani, A., & Fitch, W. T. (2015). Non-adjacent visual dependency learning in chimpanzees. Animal Cognition, 18(3), 733-745. doi:10.1007/s10071-015-0840-x.

    Abstract

    Humans have a strong proclivity for structuring and patterning stimuli: Whether in space or time, we tend to mentally order stimuli in our environment and organize them into units with specific types of relationships. A crucial prerequisite for such organization is the cognitive ability to discern and process regularities among multiple stimuli. To investigate the evolutionary roots of this cognitive capacity, we tested chimpanzees—which, along with bonobos, are our closest living relatives—for simple, variable distance dependency processing in visual patterns. We trained chimpanzees to identify pairs of shapes linked either by an arbitrary learned association (arbitrary associative dependency) or by a shared feature (same shape, feature-based dependency), and to recognize strings where items related in either of these ways occupied the first (leftmost) and the last (rightmost) positions of the stimulus. We then probed the degree to which subjects generalized this pattern to new colors, shapes, and numbers of interspersed items. We found that chimpanzees can learn and generalize both types of dependency rules, indicating that the ability to encode both feature-based and arbitrary associative regularities over variable distances in the visual domain is not a human prerogative. Our results strongly suggest that these core components of human structural processing were already present in our last common ancestor with chimpanzees.

    Additional information

    supplementary material
  • Sonnweber, R. S., Ravignani, A., Stobbe, N., Schiestl, G., Wallner, B., & Fitch, W. T. (2015). Rank‐dependent grooming patterns and cortisol alleviation in Barbary macaques. American Journal of Primatology, 77(6), 688-700. doi:10.1002/ajp.22391.

    Abstract

    Flexibly adapting social behavior to social and environmental challenges helps to alleviate glucocorticoid (GC) levels, which may have positive fitness implications for an individual. For primates, the predominant social behavior is grooming. Giving grooming to others is particularly efficient in terms of GC mitigation. However, grooming is confined by certain limitations such as time constraints or restricted access to other group members. For instance, dominance hierarchies may impact grooming partner availability in primate societies. Consequently, specific grooming patterns emerge. In despotic species, focusing grooming activity on preferred social partners significantly ameliorates GC levels in females of all ranks. In this study we investigated grooming patterns and GC management in Barbary macaques, a comparably relaxed species. We monitored changes in grooming behavior and cortisol (C) for females of different ranks. Our results show that the C-amelioration associated with different grooming patterns had a gradual connection with dominance hierarchy: while higher-ranking individuals showed lowest urinary C measures when they focused their grooming on selected partners within their social network, lower-ranking individuals expressed lowest C levels when dispersing their grooming activity evenly across their social partners. We argue that the relatively relaxed social style of Barbary macaque societies allows individuals to flexibly adapt grooming patterns, which is associated with rank-specific GC management.
  • De Sousa, H., Langella, F., & Enfield, N. J. (2015). Temperature terms in Lao, Southern Zhuang, Southern Pinghua and Cantonese. In M. Koptjevskaja-Tamm (Ed.), The linguistics of temperature (pp. 594-638). Amsterdam: Benjamins.
  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both of these predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Spaeth, J. M., Hunter, C. S., Bonatakis, L., Guo, M., French, C. A., Slack, I., Hara, M., Fisher, S. E., Ferrer, J., Morrisey, E. E., Stanger, B. Z., & Stein, R. (2015). The FOXP1, FOXP2 and FOXP4 transcription factors are required for islet alpha cell proliferation and function in mice. Diabetologia, 58, 1836-1844. doi:10.1007/s00125-015-3635-3.

    Abstract

    Aims/hypothesis: Several forkhead box (FOX) transcription factor family members have important roles in controlling pancreatic cell fates and maintaining beta cell mass and function, including FOXA1, FOXA2 and FOXM1. In this study we have examined the importance of FOXP1, FOXP2 and FOXP4 of the FOXP subfamily in islet cell development and function. Methods: Mice harbouring floxed alleles for Foxp1, Foxp2 and Foxp4 were crossed with pan-endocrine Pax6-Cre transgenic mice to generate single and compound Foxp mutant mice. Mice were monitored for changes in glucose tolerance by IPGTT, serum insulin and glucagon levels by radioimmunoassay, and endocrine cell development and proliferation by immunohistochemistry. Gene expression and glucose-stimulated hormone secretion experiments were performed with isolated islets. Results: Only the triple-compound Foxp1/2/4 conditional knockout (cKO) mutant had an overt islet phenotype, manifested physiologically by hypoglycaemia and hypoglucagonaemia. This resulted from the reduction in glucagon-secreting alpha cell mass and function. The proliferation of alpha cells was profoundly reduced in Foxp1/2/4 cKO islets through the effects on mediators of replication (i.e. decreased Ccna2, Ccnb1 and Ccnd2 activators, and increased Cdkn1a inhibitor). Adult islet Foxp1/2/4 cKO beta cells secrete insulin normally while the remaining alpha cells have impaired glucagon secretion. Conclusions/interpretation: Collectively, these findings reveal an important role for the FOXP1, 2, and 4 proteins in governing postnatal alpha cell expansion and function.
  • Spapé, M., Verdonschot, R. G., Van Dantzig, S., & Van Steenbergen, H. (2014). The E-Primer: An introduction to creating psychological experiments in E-Prime®. Leiden: Leiden University Press.

    Abstract

    E-Prime, the software suite by Psychology Software Tools, is used for designing, developing and running custom psychological experiments. Aimed at students and researchers alike, this book provides a much needed, down-to-earth introduction into a wide range of experiments that can be set up using E-Prime. Many tutorials are provided to teach the reader how to develop experiments typical for the broad fields of psychological and cognitive science. Apart from explaining the basic structure of E-Prime and describing how it fits into daily scientific practice, this book also offers an introduction into programming using E-Prime’s own E-Basic language. The authors guide the readers step-by-step through the software, from an elementary to an advanced level, enabling them to benefit from the enormous possibilities for experimental design offered by E-Prime.
  • Speed, L. J., Wnuk, E., & Majid, A. (2018). Studying psycholinguistics out of the lab. In A. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 190-207). Hoboken: Wiley.

    Abstract

    Traditional psycholinguistic studies take place in controlled experimental labs and typically involve testing undergraduate psychology or linguistics students. Investigating psycholinguistics in this manner calls into question the external validity of findings, that is, the extent to which research findings generalize across languages and cultures, as well as ecologically valid settings. Here we consider three ways in which psycholinguistics can be taken out of the lab. First, researchers can conduct cross-cultural fieldwork in diverse languages and cultures. Second, they can conduct online experiments or experiments in institutionalized public spaces (e.g., museums) to obtain large, diverse participant samples. And, third, researchers can perform studies in more ecologically valid settings, to increase the real-world generalizability of findings. By moving away from the traditional lab setting, psycholinguists can enrich their understanding of language use in all its rich and diverse contexts.
  • Speed, L. J., & Majid, A. (2018). An exception to mental simulation: No evidence for embodied odor language. Cognitive Science, 42(4), 1146-1178. doi:10.1111/cogs.12593.

    Abstract

    Do we mentally simulate olfactory information? We investigated mental simulation of odors and sounds in two experiments. Participants retained a word while they smelled an odor or heard a sound, then rated odor/sound intensity and recalled the word. Later odor/sound recognition was also tested, and pleasantness and familiarity judgments were collected. Word recall was slower when the sound and sound-word mismatched (e.g., bee sound with the word typhoon). Sound recognition was higher when sounds were paired with a match or near-match word (e.g., bee sound with bee or buzzer). This indicates sound-words are mentally simulated. However, using the same paradigm no memory effects were observed for odor. Instead it appears odor-words only affect lexical-semantic representations, demonstrated by higher ratings of odor intensity and pleasantness when an odor was paired with a match or near-match word (e.g., peach odor with peach or mango). These results suggest fundamental differences in how odor and sound-words are represented.

    Additional information

    cogs12593-sup-0001-SupInfo.docx
  • Speed, L., & Majid, A. (2018). Music and odor in harmony: A case of music-odor synaesthesia. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 2527-2532). Austin, TX: Cognitive Science Society.

    Abstract

    We report an individual with music-odor synaesthesia who experiences automatic and vivid odor sensations when she hears music. S’s odor associations were recorded on two days, and compared with those of two control participants. Overall, S produced longer descriptions, and her associations were of multiple odors at once, in comparison to controls who typically reported a single odor. Although odor associations were qualitatively different between S and controls, ratings of the consistency of their descriptions did not differ. This demonstrates that crossmodal associations between music and odor exist in non-synaesthetes too. We also found that S is better at discriminating between odors than control participants, and is more likely to experience emotion, memories and evaluations triggered by odors, demonstrating the broader impact of her synaesthesia.

    Additional information

    link to conference website
  • Speed, L. J., & Majid, A. (2018). Superior olfactory language and cognition in odor-color synaesthesia. Journal of Experimental Psychology: Human Perception and Performance, 44(3), 468-481. doi:10.1037/xhp0000469.

    Abstract

    Olfaction is often considered a vestigial sense in humans, demoted throughout evolution to make way for the dominant sense of vision. This perspective on olfaction is reflected in how we think and talk about smells in the West, with odor imagery and odor language reported to be difficult. In the present study we demonstrate odor cognition is superior in odor-color synaesthesia, where there are additional sensory connections to odor concepts. Synaesthesia is a neurological phenomenon in which input in 1 modality leads to involuntary perceptual associations. Semantic accounts of synaesthesia posit synaesthetic associations are mediated by activation of inducing concepts. Therefore, synaesthetic associations may strengthen conceptual representations. To test this idea, we ran 6 odor-color synaesthetes and 17 matched controls on a battery of tasks exploring odor and color cognition. We found synaesthetes outperformed controls on tests of both odor and color discrimination, demonstrating for the first time enhanced perception in both the inducer (odor) and concurrent (color) modality. So, not only do synaesthetes have additional perceptual experiences in comparison to controls, their primary perceptual experience is also different. Finally, synaesthetes were more consistent and accurate at naming odors. We propose synaesthetic associations to odors strengthen odor concepts, making them more differentiated (facilitating odor discrimination) and easier to link with lexical representations (facilitating odor naming). In summary, we show for the first time that both odor language and perception are enhanced in people with synaesthetic associations to odors.
  • Stassen, H., & Levelt, W. J. M. (1976). Systemen, automaten en grammatica's. In J. Michon, E. Eijkman, & L. De Klerk (Eds.), Handboek der psychonomie (pp. 100-127). Deventer: Van Loghum Slaterus.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

    OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m^2" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of that phenotype which appropriately accounts for covariation in height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC) was performed. RESULTS: GWAS based on BMI[x] yielded marked differences in the genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at genome-wide significance level (rs11676272, 0.28 kg/m^3.1 change per allele G (0.19, 0.38), P = 6 × 10^-9). In contrast, they showed marginal evidence of association with conventional BMI (rs11676272, 0.25 kg/m^2 (0.15, 0.35), P = 6 × 10^-7). Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences to that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant and it was strongly correlated with expression of this gene. Our work highlights the importance of well-understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits. (One way to derive such a height-adjusted index is sketched after this entry.)

    Additional information

    oby20840-sup-0001-suppinfo.docx
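    As noted above, a minimal sketch of one way to obtain a height-adjusted BMI of the form BMI[x] = weight/height^x (an assumption about the derivation; the study's own estimation procedure may differ, and the function name is illustrative):

```python
# Illustrative sketch under the assumption that the exponent can be obtained from a
# log-log regression of weight on height; the study's own derivation may differ.
# BMI[x] = weight / height**x, with x chosen so that BMI[x] is uncorrelated with height.
import numpy as np

def height_adjusted_bmi(weight_kg, height_m):
    """Return (x, bmi_x), where bmi_x = weight / height**x is uncorrelated with height in logs."""
    log_w, log_h = np.log(weight_kg), np.log(height_m)
    x = np.polyfit(log_h, log_w, 1)[0]  # slope of log(weight) ~ log(height)
    return x, weight_kg / height_m ** x
```

    The abstract reports effect sizes in kg/m^3.1, i.e. the exponent fitted in that sample of children was about 3.1 rather than the conventional 2.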
  • Stergiakouli, E., Martin, J., Hamshere, M. L., Langley, K., Evans, D. M., St Pourcain, B., Timpson, N. J., Owen, M. J., O'Donovan, M., Thapar, A., & Davey Smith, G. (2015). Shared Genetic Influences Between Attention-Deficit/Hyperactivity Disorder (ADHD) Traits in Children and Clinical ADHD. Journal of the American Academy of Child and Adolescent Psychiatry, 54(4), 322-327. doi:10.1016/j.jaac.2015.01.010.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2018). Heritage language exposure impacts voice onset time of Dutch–German simultaneous bilingual preschoolers. Bilingualism: Language and Cognition, 21(3), 598-617. doi:10.1017/S1366728917000116.

    Abstract

    This study assesses the effects of age and language exposure on VOT production in 29 simultaneous bilingual children aged 3;7 to 5;11 who speak German as a heritage language in the Netherlands. Dutch and German have a binary voicing contrast, but the contrast is implemented with different VOT values in the two languages. The results suggest that bilingual children produce ‘voiced’ plosives similarly in their two languages, and these productions are not monolingual-like in either language. Bidirectional cross-linguistic influence between Dutch and German can explain these results. Yet, the bilinguals seemingly have two autonomous categories for Dutch and German ‘voiceless’ plosives. In German, the bilinguals’ aspiration is not monolingual-like, but bilinguals with more heritage language exposure produce more target-like aspiration. Importantly, the amount of exposure to German has no effect on the majority language's ‘voiceless’ category. This implies that more heritage language exposure is associated with more language-specific voicing systems.
  • Stoehr, A. (2018). Speech production, perception, and input of simultaneous bilingual preschoolers: Evidence from voice onset time. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
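    The quantity at the heart of the analysis above is spectral coherence between two simultaneously recorded signals. A generic sketch (not the study's pipeline; the sampling rate, window length, and placeholder noise signals are assumptions):

```python
# Generic sketch of the quantity underlying the analysis (not the study's pipeline):
# magnitude-squared coherence between two simultaneously recorded signals.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                    # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
signal_a = rng.standard_normal(int(60 * fs))  # stands in for one participant's sensor
signal_b = rng.standard_normal(int(60 * fs))  # stands in for the partner's sensor

freqs, cxy = coherence(signal_a, signal_b, fs=fs, nperseg=int(2 * fs))
print(freqs[np.argmax(cxy)], cxy.max())       # frequency with the strongest coupling
```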
  • Stolk, A., Griffin, S., Van der Meij, R., Dewar, C., Saez, I., Lin, J. J., Piantoni, G., Schoffelen, J.-M., Knight, R. T., & Oostenveld, R. (2018). Integrated analysis of anatomical and electrophysiological human intracranial data. Nature Protocols, 13, 1699-1723. doi:10.1038/s41596-018-0009-6.

    Abstract

    Human intracranial electroencephalography (iEEG) recordings provide data with much greater spatiotemporal precision than is possible from data obtained using scalp EEG, magnetoencephalography (MEG), or functional MRI. Until recently, the fusion of anatomical data (MRI and computed tomography (CT) images) with electrophysiological data and their subsequent analysis have required the use of technologically and conceptually challenging combinations of software. Here, we describe a comprehensive protocol that enables complex raw human iEEG data to be converted into more readily comprehensible illustrative representations. The protocol uses an open-source toolbox for electrophysiological data analysis (FieldTrip). This allows iEEG researchers to build on a continuously growing body of scriptable and reproducible analysis methods that, over the past decade, have been developed and used by a large research community. In this protocol, we describe how to analyze complex iEEG datasets by providing an intuitive and rapid approach that can handle both neuroanatomical information and large electrophysiological datasets. We provide a worked example using an example dataset. We also explain how to automate the protocol and adjust the settings to enable analysis of iEEG datasets with other characteristics. The protocol can be implemented by a graduate student or postdoctoral fellow with minimal MATLAB experience and takes approximately an hour to execute, excluding the automated cortical surface extraction.
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
  • Sulik, J. (2018). Cognitive mechanisms for inferring the meaning of novel signals during symbolisation. PLoS One, 13(1): e0189540. doi:10.1371/journal.pone.0189540.

    Abstract

    As participants repeatedly interact using graphical signals (as in a game of Pictionary), the signals gradually shift from being iconic (or motivated) to being symbolic (or arbitrary). The aim here is to test experimentally whether this change in the form of the signal implies a concomitant shift in the inferential mechanisms needed to understand it. The results show that, during early, iconic stages, there is more reliance on creative inferential processes associated with insight problem solving, and that the recruitment of these cognitive mechanisms decreases over time. The variation in inferential mechanism is not predicted by the sign’s visual complexity or iconicity, but by its familiarity, and by the complexity of the relevant mental representations. The discussion explores implications for pragmatics, language evolution, and iconicity research.
  • Sumer, B. (2015). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, Tx: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • De Swart, P., & Van Bergen, G. (2014). Unscrambling the lexical nature of weak definites. In A. Aguilar-Guevara, B. Le Bruyn, & J. Zwarts (Eds.), Weak referentiality (pp. 287-310). Amsterdam: Benjamins.

    Abstract

    We investigate how the lexical nature of weak definites influences the phenomenon of direct object scrambling in Dutch. Earlier experiments have indicated that weak definites are more resistant to scrambling than strong definites. We examine how the notion of weak definiteness used in this experimental work can be reduced to lexical connectedness. We explore four different ways of quantifying the relation between a direct object and the verb. Our results show that predictability of a verb given the object (verb cloze probability) provides the best fit to the weak/strong distinction used in the earlier experiments.
  • Sweegers, C. C. G., Takashima, A., Fernández, G., & Talamini, L. M. (2015). Neural mechanisms supporting the extraction of general knowledge across episodic memories. NeuroImage, 87, 138-146. doi:10.1016/j.neuroimage.2013.10.063.

    Abstract

    General knowledge acquisition entails the extraction of statistical regularities from the environment. At high levels of complexity, this may involve the extraction, and consolidation, of associative regularities across event memories. The underlying neural mechanisms would likely involve a hippocampo-neocortical dialog, as proposed previously for system-level consolidation. To test these hypotheses, we assessed possible differences in consolidation between associative memories containing cross-episodic regularities and unique associative memories. Subjects learned face–location associations, half of which responded to complex regularities regarding the combination of facial features and locations, whereas the other half did not. Importantly, regularities could only be extracted over hippocampus-encoded, associative aspects of the items. Memory was assessed both immediately after encoding and 48 h later, under fMRI acquisition. Our results suggest that processes related to system-level reorganization occur preferentially for regular associations across episodes. Moreover, the build-up of general knowledge regarding regular associations appears to involve the coordinated activity of the hippocampus and mediofrontal regions. The putative cross-talk between these two regions might support a mechanism for regularity extraction. These findings suggest that the consolidation of cross-episodic regularities may be a key mechanism underlying general knowledge acquisition.
  • Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.

    Abstract

    To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we trained Dutch participants repeatedly over the course of a month of training to articulate a set of novel disyllabic input strings written in Greek script to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole word unit) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that reading recombined syllables of the trained pseudowords showed similar brain activation to trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization. NeuroImage, 84, 265-278. doi:10.1016/j.neuroimage.2013.08.023.

    Abstract

    The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words’ phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words’ phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24 hours’ delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding.
  • Tamaoka, K., Saito, N., Kiyama, S., Timmer, K., & Verdonschot, R. G. (2014). Is pitch accent necessary for comprehension by native Japanese speakers? - An ERP investigation. Journal of Neurolinguistics, 27(1), 31-40. doi:10.1016/j.jneuroling.2013.08.001.

    Abstract

    Not unlike the tonal system in Chinese, Japanese habitually attaches pitch accents to the production of words. However, in contrast to Chinese, few homophonic word-pairs are really distinguished by pitch accents (Shibata & Shibata, 1990). This predicts that pitch accent plays a small role in lexical selection for Japanese language comprehension. The present study investigated whether native Japanese speakers necessarily use pitch accent in the processing of accent-contrasted homophonic pairs (e.g., ame [LH] for 'candy' and ame [HL] for 'rain') measuring electroencephalographic (EEG) potentials. Electrophysiological evidence (i.e., N400) was obtained when a word was semantically incorrect for a given context but not for incorrectly accented homophones. This suggests that pitch accent indeed plays a minor role when understanding Japanese.
  • Tamariz, M., Roberts, S. G., Martínez, J. I., & Santiago, J. (2018). The Interactive Origin of Iconicity. Cognitive Science, 42, 334-349. doi:10.1111/cogs.12497.

    Abstract

    We investigate the emergence of iconicity, specifically a bouba-kiki effect in miniature artificial languages under different functional constraints: when the languages are reproduced and when they are used communicatively. We ran transmission chains of (a) participant dyads who played an interactive communicative game and (b) individual participants who played a matched learning game. An analysis of the languages over six generations in an iterated learning experiment revealed that in the Communication condition, but not in the Reproduction condition, words for spiky shapes tend to be rated by naive judges as more spiky than the words for round shapes. This suggests that iconicity may not only be the outcome of innovations introduced by individuals, but, crucially, the result of interlocutor negotiation of new communicative conventions. We interpret our results as an illustration of cultural evolution by random mutation and selection (as opposed to by guided variation).
  • Tan, Y., & Martin, R. C. (2018). Verbal short-term memory capacities and executive function in semantic and syntactic interference resolution during sentence comprehension: Evidence from aphasia. Neuropsychologia, 113, 111-125. doi:10.1016/j.neuropsychologia.2018.03.001.

    Abstract

    This study examined the role of verbal short-term memory (STM) and executive function (EF) underlying semantic and syntactic interference resolution during sentence comprehension for persons with aphasia (PWA) with varying degrees of STM and EF deficits. Semantic interference was manipulated by varying the semantic plausibility of the intervening NP as subject of the verb and syntactic interference was manipulated by varying whether the NP was another subject or an object. Nine PWA were assessed on sentence reading times and on comprehension question performance. PWA showed exaggerated semantic and syntactic interference effects relative to healthy age-matched control subjects. Importantly, correlational analyses showed that while answering comprehension questions, PWA’s semantic STM capacity related to their ability to resolve semantic but not syntactic interference. In contrast, PWA’s EF abilities related to their ability to resolve syntactic but not semantic interference. Phonological STM deficits were not related to the ability to resolve either type of interference. The results for semantic STM are consistent with prior findings indicating a role for semantic but not phonological STM in sentence comprehension, specifically with regard to maintaining semantic information prior to integration. The results for syntactic interference are consistent with the recent findings suggesting that EF is critical for syntactic processing.
  • Tanner, D., Nicol, J., & Brehm, L. (2014). The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76, 195-215. doi:10.1016/j.jml.2014.07.003.

    Abstract

    Attraction interference in language comprehension and production may arise from common or different processes. In the present paper, we investigate attraction interference during language comprehension, focusing on the contexts in which interference arises and the time-course of these effects. Using evidence from event-related brain potentials (ERPs) and sentence judgment times, we show that agreement attraction in comprehension is best explained as morphosyntactic interference during memory retrieval. This stands in contrast to attraction as a message-level process involving the representation of the subject NP's number features, which is a strong contributor to attraction in production. We thus argue that the cognitive antecedents of agreement attraction in comprehension are non-identical with those of attraction in production, and moreover, that attraction in comprehension is primarily a consequence of similarity-based interference in cue-based memory retrieval processes. We suggest that mechanisms responsible for attraction during language comprehension are a subset of those involved in language production.
  • Tarenskeen, S., Broersma, M., & Geurts, B. (2015). Overspecification of color, pattern, and size: Salience, absoluteness, and consistency. Frontiers in Psychology, 6: 1703. doi:10.3389/fpsyg.2015.01703.

    Abstract

    The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Using a between-participants design, however, we find similar rates of pattern and size overspecification, which are both lower than the rate of color overspecification. This indicates that although many speakers are more likely to include color than pattern (probably because color is more salient), they may also treat pattern like color due to a tendency toward consistency. We find no increase in size overspecification when the salience of size is increased, suggesting that speakers are more likely to include absolute than relative attributes. However, we do find an increase in size overspecification when mentioning the attributes is triggered, which again shows that speakers tend to refer in a consistent manner, and that there are circumstances in which even size overspecification is frequently produced.
  • Teeling, E., Vernes, S. C., Davalos, L. M., Ray, D. A., Gilbert, M. T. P., Myers, E., & Bat1K Consortium (2018). Bat biology, genomes, and the Bat1K project: To generate chromosome-level genomes for all living bat species. Annual Review of Animal Biosciences, 6, 23-46. doi:10.1146/annurev-animal-022516-022811.

    Abstract

    Bats are unique among mammals, possessing some of the rarest mammalian adaptations, including true self-powered flight, laryngeal echolocation, exceptional longevity, unique immunity, contracted genomes, and vocal learning. They provide key ecosystem services, pollinating tropical plants, dispersing seeds, and controlling insect pest populations, thus driving healthy ecosystems. They account for more than 20% of all living mammalian diversity, and their crown-group evolutionary history dates back to the Eocene. Despite their great numbers and diversity, many species are threatened and endangered. Here we announce Bat1K, an initiative to sequence the genomes of all living bat species (n∼1,300) to chromosome-level assembly. The Bat1K genome consortium unites bat biologists (>132 members as of writing), computational scientists, conservation organizations, genome technologists, and any interested individuals committed to a better understanding of the genetic and evolutionary mechanisms that underlie the unique adaptations of bats. Our aim is to catalog the unique genetic diversity present in all living bats to better understand the molecular basis of their unique adaptations; uncover their evolutionary history; link genotype with phenotype; and ultimately better understand, promote, and conserve bats. Here we review the unique adaptations of bats and highlight how chromosome-level genome assemblies can uncover the molecular basis of these traits. We present a novel sequencing and assembly strategy and review the striking societal and scientific benefits that will result from the Bat1K initiative.
  • Tekcan, A. I., Yilmaz, E., Kaya Kızılö, B., Karadöller, D. Z., Mutafoğlu, M., & Erciyes, A. (2015). Retrieval and phenomenology of autobiographical memories in blind individuals. Memory, 23(3), 329-339. doi:10.1080/09658211.2014.886702.

    Abstract

    Although visual imagery is argued to be an essential component of autobiographical memory, there have been surprisingly few studies on autobiographical memory processes in blind individuals, who have had no or limited visual input. The purpose of the present study was to investigate how blindness affects retrieval and phenomenology of autobiographical memories. We asked 48 congenital/early blind and 48 sighted participants to recall autobiographical memories in response to six cue words, and to fill out the Autobiographical Memory Questionnaire measuring a number of variables including imagery, belief and recollective experience associated with each memory. Blind participants retrieved fewer memories and reported higher auditory imagery at retrieval than sighted participants. Moreover, within the blind group, participants with total blindness reported higher auditory imagery than those with some light perception. Blind participants also assigned higher importance, belief and recollection ratings to their memories than sighted participants. Importantly, these group differences remained the same for recent as well as childhood memories.
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2018). Analyzing reaction time sequences from human participants in auditory experiments. In Proceedings of Interspeech 2018 (pp. 971-975). doi:10.21437/Interspeech.2018-1728.

    Abstract

    Sequences of reaction times (RT) produced by participants in an experiment are not only influenced by the stimuli, but by many other factors as well, including fatigue, attention, experience, IQ, handedness, etc. These confounding factors result in longterm effects (such as a participant’s overall reaction capability) and in short- and medium-time fluctuations in RTs (often referred to as ‘local speed effects’). Because stimuli are usually presented in a random sequence different for each participant, local speed effects affect the underlying ‘true’ RTs of specific trials in different ways across participants. To be able to focus statistical analysis on the effects of the cognitive process under study, it is necessary to reduce the effect of confounding factors as much as possible. In this paper we propose and compare techniques and criteria for doing so, with focus on reducing (‘filtering’) the local speed effects. We show that filtering matters substantially for the significance analyses of predictors in linear mixed effect regression models. The performance of filtering is assessed by the average between-participant correlation between filtered RT sequences and by Akaike’s Information Criterion, an important measure of the goodness-of-fit of linear mixed effect regression models.
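    A minimal sketch of the kind of 'local speed' filtering discussed above (one simple filter chosen for illustration; the paper compares several techniques and criteria that are not reproduced here, and the names below are assumptions):

```python
# Illustrative sketch of "local speed" filtering (one simple filter; not necessarily
# any of the techniques compared in the paper).
import numpy as np
import pandas as pd

def remove_local_speed(rt_sequence, window=11):
    """Subtract a centered moving average (window in trials) and return the residual RTs."""
    rt = pd.Series(rt_sequence, dtype=float)
    local_speed = rt.rolling(window, center=True, min_periods=1).mean()
    return (rt - local_speed).to_numpy()

# Filtered sequences from participants who saw the same items in different orders can be
# re-aligned by item and correlated, as in the between-participant criterion above.
residuals = remove_local_speed(np.random.default_rng(1).gamma(shape=9, scale=80, size=200))
```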
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2014). Comparing reaction time sequences from human participants and computational models. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 462-466).

    Abstract

    This paper addresses the question of how to compare reaction times computed by a computational model of speech comprehension with observed reaction times from participants. The question is based on the observation that reaction time sequences substantially differ per participant, which raises the issue of how exactly the model is to be assessed. Part of the variation in reaction time sequences is caused by the so-called local speed: the current reaction time correlates to some extent with a number of previous reaction times, due to slowly varying fluctuations in attention, fatigue, etc. This paper proposes a method, based on time series analysis, to filter the observed reaction times in order to separate the local speed effects. Results show that after such filtering both the between-participant correlations and the average correlation between participant and model increase. The presented technique provides insights into relevant aspects that are to be taken into account when comparing reaction time sequences.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2015). DIANA, an end-to-end computational model of human word comprehension. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper presents DIANA, a new computational model of human speech processing. It is the first model that simulates the complete processing chain from the on-line processing of an acoustic signal to the execution of a response, including reaction times. Moreover it assumes minimal modularity. DIANA consists of three components. The activation component computes a probabilistic match between the input acoustic signal and representations in DIANA’s lexicon, resulting in a list of word hypotheses changing over time as the input unfolds. The decision component operates on this list and selects a word as soon as sufficient evidence is available. Finally, the execution component accounts for the time to execute a behavioral action. We show that DIANA well simulates the average participant in a word recognition experiment.
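    A toy sketch in the spirit of the decision component described above (not the published implementation; the threshold rule and names are assumptions): a word is selected as soon as its activation advantage over the best competitor is large enough, and the time step at which this happens stands in for the internal decision time.

```python
# Toy sketch of an evidence-threshold decision rule (not the published implementation).
import numpy as np

def decide(activations, threshold=2.0):
    """activations: (n_timesteps, n_words) log activations; assumes at least two words.
    Returns (winning word index, decision time step), or (None, None) if no decision."""
    for t, act in enumerate(activations):
        order = np.argsort(act)[::-1]          # candidates sorted by current evidence
        if act[order[0]] - act[order[1]] >= threshold:
            return int(order[0]), t            # execution time would be added on top
    return None, None
```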
  • Ten Bosch, L., Boves, L., Tucker, B., & Ernestus, M. (2015). DIANA: Towards computational modeling reaction times in lexical decision in North American English. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 1576-1580).

    Abstract

    DIANA is an end-to-end computational model of speech processing, which takes as input the speech signal, and provides as output the orthographic transcription of the stimulus, a word/non-word judgment and the associated estimated reaction time. So far, the model has only been tested for Dutch. In this paper, we extend DIANA such that it can also process North American English. The model is tested by having it simulate human participants in a large scale North American English lexical decision experiment. The simulations show that DIANA can adequately approximate the reaction times of an average participant (r = 0.45). In addition, they indicate that DIANA does not yet adequately model the cognitive processes that take place after stimulus offset.
  • Ten Oever, S., Van Atteveldt, N., & Sack, A. T. (2015). Increased stimulus expectancy triggers low-frequency phase reset during restricted vigilance. Journal of Cognitive Neuroscience, 27(9), 1811-1822. doi:10.1162/jocn_a_00820.

    Abstract

    Temporal cues can be used to selectively attend to relevant information during abundant sensory stimulation. However, such cues differ vastly in the accuracy of their temporal estimates, ranging from very predictable to very unpredictable. When cues are strongly predictable, attention may facilitate selective processing by aligning relevant incoming information to high neuronal excitability phases of ongoing low-frequency oscillations. However, top-down effects on ongoing oscillations when temporal cues have some predictability, but also contain temporal uncertainties, are unknown. Here, we experimentally created such a situation of mixed predictability and uncertainty: A target could occur within a limited time window after cue but was always unpredictable in exact timing. Crucially to assess top-down effects in such a mixed situation, we manipulated target probability. High target likelihood, compared with low likelihood, enhanced delta oscillations more strongly as measured by evoked power and intertrial coherence. Moreover, delta phase modulated detection rates for probable targets. The delta frequency range corresponds with half-a-period to the target occurrence window and therefore suggests that low-frequency phase reset is engaged to produce a long window of high excitability when event timing is uncertain within a restricted temporal window.
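    A generic sketch of the intertrial coherence measure referred to above (not the study's code; the filter order, band limits, and names are assumptions):

```python
# Generic sketch of inter-trial coherence (ITC) in the delta band.
# epochs: array of shape (n_trials, n_samples), time-locked to the cue.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_itc(epochs, sfreq, band=(0.5, 3.0)):
    """ITC over time: 1 = identical phase across trials, 0 = uniformly spread phases."""
    nyq = sfreq / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=1)        # zero-phase delta-band filter
    phases = np.angle(hilbert(filtered, axis=1))     # instantaneous phase per trial
    return np.abs(np.exp(1j * phases).mean(axis=0))  # resultant vector length over trials
```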
  • Ten Bosch, L., & Boves, L. (2018). Information encoding by deep neural networks: what can we learn? In Proceedings of Interspeech 2018 (pp. 1457-1461). doi:10.21437/Interspeech.2018-1896.

    Abstract

    The recent advent of deep learning techniques in speech technology and in particular in automatic speech recognition has yielded substantial performance improvements. This suggests that deep neural networks (DNNs) are able to capture structure in speech data that older methods for acoustic modeling, such as Gaussian Mixture Models and shallow neural networks, fail to uncover. In image recognition it is possible to link representations on the first couple of layers in DNNs to structural properties of images, and to representations on early layers in the visual cortex. This raises the question whether it is possible to accomplish a similar feat with representations on DNN layers when processing speech input. In this paper we present three different experiments in which we attempt to untangle how DNNs encode speech signals, and to relate these representations to phonetic knowledge, with the aim to advance conventional phonetic concepts and to choose the topology of a DNN more efficiently. Two experiments investigate representations formed by auto-encoders. A third experiment investigates representations on convolutional layers that treat speech spectrograms as if they were images. The results lay the basis for future experiments with recursive networks.
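    An illustrative PyTorch sketch of the kind of frame-level auto-encoder whose representations could be probed in experiments like those above (not the paper's architecture; layer sizes and all names are assumptions):

```python
# Illustrative sketch: a small auto-encoder over single log-spectrogram frames, whose
# bottleneck code can be inspected and related to phonetic categories.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, n_freq_bins=128, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_freq_bins, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_freq_bins)

    def forward(self, x):            # x: (batch, n_freq_bins) log-spectrogram frames
        code = self.encoder(x)       # bottleneck representation to be analyzed
        return self.decoder(code), code

model = FrameAutoencoder()
frames = torch.randn(16, 128)        # placeholder batch of spectrogram frames
reconstruction, code = model(frames)
loss = nn.functional.mse_loss(reconstruction, frames)  # training objective
```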
  • Ten Oever, S., & Sack, A. T. (2015). Oscillatory phase shapes syllable perception. Proceedings of the National Academy of Sciences of the United States of America, 112(52), 15833-15837. doi:10.1073/pnas.1517519112.

    Abstract

    The role of oscillatory phase for perceptual and cognitive processes is being increasingly acknowledged. To date, little is known about the direct role of phase in categorical perception. Here we show in two separate experiments that the identification of ambiguous syllables that can either be perceived as /da/ or /ga/ is biased by the underlying oscillatory phase as measured with EEG and sensory entrainment to rhythmic stimuli. The measured phase difference in which perception is biased toward /da/ or /ga/ exactly matched the different temporal onset delays in natural audiovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for /da/. These results indicate the functional relationship between prestimulus phase and syllable identification, and signify that the origin of this phase relationship could lie in exposure and subsequent learning of unique audiovisual temporal onset differences.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., & Zion-Golumbic, E. (2014). Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia, 63, 43-50. doi:10.1016/j.neuropsychologia.2014.08.008.

    Abstract

    Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
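    The phrase "independent from response bias" refers to signal detection theory, in which sensitivity (d') and response bias (criterion c) are estimated separately from hit and false-alarm rates. A generic sketch with invented counts, not the study's data:

    ```python
    # Generic signal-detection sketch: separating sensitivity (d') from
    # response bias (criterion c). Counts below are invented for illustration.
    from scipy.stats import norm

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate); c = -0.5 * (z(HR) + z(FAR))."""
        # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
        hr = (hits + 0.5) / (hits + misses + 1.0)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = norm.ppf(hr) - norm.ppf(far)
        criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))
        return d_prime, criterion

    # Hypothetical tone-in-noise detection counts for one condition.
    d, c = dprime_and_criterion(hits=70, misses=30,
                                false_alarms=15, correct_rejections=85)
    print(f"d' = {d:.2f}, criterion c = {c:.2f}")
    ```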
  • Terband, H., Rodd, J., & Maas, E. (2015). Simulations of feedforward and feedback control in apraxia of speech (AOS): Effects of noise masking on vowel production in the DIVA model. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahan, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).

    Abstract

    Apraxia of Speech (AOS) is a motor speech disorder whose precise nature is still poorly understood. A recent behavioural experiment featuring a noise masking paradigm suggests that AOS reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts [10]. In the present study, we set out to validate the interpretation of AOS as a feedforward impairment by means of a series of computational simulations with the DIVA model [6, 7] mimicking the behavioural experiment. Simulation results showed a larger reduction in vowel spacing and a smaller vowel dispersion in the masking condition compared to the no-masking condition for the simulated feedforward deficit, whereas the other groups showed an opposite pattern. These results mimic the patterns observed in the human data, corroborating the notion that AOS can be conceptualized as a deficit in feedforward control.
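    Vowel spacing and vowel dispersion are standard acoustic summaries; one common way to compute them from F1/F2 formant values is sketched below. The formant values are placeholders, and the DIVA simulations themselves are not reproduced.

    ```python
    # Sketch of two common vowel-space measures from F1/F2 formant values (Hz):
    # - spacing: mean pairwise Euclidean distance between vowel category means
    # - dispersion: mean distance of individual tokens from their category mean
    # Values are placeholders, not DIVA output.
    import numpy as np
    from itertools import combinations

    # Hypothetical tokens: {vowel: array of (F1, F2) pairs}
    tokens = {
        "i": np.array([[300.0, 2300.0], [310.0, 2250.0], [295.0, 2320.0]]),
        "a": np.array([[750.0, 1200.0], [770.0, 1180.0], [740.0, 1230.0]]),
        "u": np.array([[320.0, 800.0], [335.0, 780.0], [310.0, 820.0]]),
    }

    means = {v: pts.mean(axis=0) for v, pts in tokens.items()}

    spacing = np.mean([np.linalg.norm(means[a] - means[b])
                       for a, b in combinations(means, 2)])
    dispersion = np.mean([np.linalg.norm(pt - means[v])
                          for v, pts in tokens.items() for pt in pts])

    print(f"vowel spacing:    {spacing:.1f} Hz")
    print(f"vowel dispersion: {dispersion:.1f} Hz")
    ```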
  • Terwisscha van Scheltinga, A. F., Bakker, S. C., Van Haren, N. E., Boos, H. B., Schnack, H. G., Cahn, W., Hoogman, M., Zwiers, M. P., Fernandez, G., Franke, B., Hulshoff Pol, H. E., & Kahn, R. S. (2014). Association study of fibroblast growth factor genes and brain volumes in schizophrenic patients and healthy controls. Psychiatric Genetics, 24, 283-284. doi:10.1097/YPG.0000000000000057.
  • Theakston, A., Coates, A., & Holler, J. (2014). Handling agents and patients: Representational cospeech gestures help children comprehend complex syntactic constructions. Developmental Psychology, 50(7), 1973-1984. doi:10.1037/a0036694.

    Abstract

    Gesture is an important precursor of children’s early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children’s comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old (M = 4 years 6 months) children witnessed 2-participant actions described using the infrequent object-cleft-construction (OCC; It was the dog that the cat chased). Half saw an experimenter accompanying her descriptions with gestures representing the 2 participants and indicating the direction of action; the remaining children did not witness gesture. Children who witnessed gestures showed better comprehension of the OCC than those who did not witness gestures, both in and beyond the immediate physical context, but this benefit was restricted to the oldest 4-year-olds. In Study 2, a further group of older 4-year-old children (M = 4 years 7 months) witnessed the same 2-participant actions described by an experimenter and accompanied by gestures, but the gesture represented only the 2 participants and not the direction of the action. Again, a benefit of gesture was observed on subsequent comprehension of the OCC. We interpret these findings as demonstrating that representational cospeech gestures can help children comprehend complex linguistic structures by highlighting the roles played by the participants in the event.

  • Thielen, J.-W., Takashima, A., Rutters, F., Tendolkar, I., & Fernandez, G. (2015). Transient relay function of midline thalamic nuclei during long-term memory consolidation in humans. Learning & Memory, 22, 527-531. doi:10.1101/lm.038372.115.

    Abstract

    To test the hypothesis that thalamic midline nuclei play a transient role in memory consolidation, we reanalyzed a prospective functional MRI study, contrasting recent and progressively more remote memory retrieval. We revealed a transient thalamic connectivity increase with the hippocampus, the medial prefrontal cortex (mPFC), and a parahippocampal area, which decreased with time. In turn, mPFC-parahippocampal connectivity increased progressively. These findings support a model in which thalamic midline nuclei serve as a hub linking hippocampus, mPFC, and posterior representational areas during memory retrieval at an early (2 h) stage of consolidation, extending classical systems consolidation models by attributing a transient role to midline thalamic nuclei.
  • Thomassen, A., & Kempen, G. (1976). Geheugen. In J. A. Michon, E. Eijkman, & L. F. De Klerk (Eds.), Handboek der Psychonomie (pp. 354-387). Deventer: Van Loghum Slaterus.
  • Thompson, B., & Lupyan, G. (2018). Automatic estimation of lexical concreteness in 77 languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1122-1127). Austin, TX: Cognitive Science Society.

    Abstract

    We estimate lexical Concreteness for millions of words across 77 languages. Using a simple regression framework, we combine vector-based models of lexical semantics with experimental norms of Concreteness in English and Dutch. By applying techniques to align vector-based semantics across distinct languages, we compute and release Concreteness estimates at scale in numerous languages for which experimental norms are not currently available. This paper lays out the technique and its efficacy. Although this is a difficult dataset to evaluate immediately, Concreteness estimates computed from English correlate with Dutch experimental norms at ρ = .75 in the vocabulary at large, increasing to ρ = .8 among Nouns. Our predictions also recapitulate attested relationships with word frequency. The approach we describe can be readily applied to numerous lexical measures beyond Concreteness.
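    The general recipe described here (fit a regression from word vectors to human concreteness norms, then apply it to cross-lingually aligned vectors for unnormed words) can be sketched as follows. The embeddings and norms are random placeholders, and this is not the authors' released code.

    ```python
    # Sketch of the general approach: learn a mapping from word vectors to
    # concreteness ratings, then predict ratings for words without norms.
    # Arrays are random placeholders, not the paper's embeddings or norms.
    import numpy as np
    from sklearn.linear_model import Ridge
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    dim = 300

    # Hypothetical (cross-lingually aligned) embeddings and concreteness norms.
    X_normed = rng.normal(size=(2000, dim))       # words with human ratings
    y_normed = rng.uniform(1.0, 5.0, size=2000)   # e.g., a 1-5 concreteness scale
    X_new = rng.normal(size=(500, dim))           # words in another language

    model = Ridge(alpha=1.0).fit(X_normed, y_normed)
    predicted = model.predict(X_new)

    # With real data one would validate against held-out norms; with these
    # random placeholders the correlation is of course meaningless.
    held_out_true = rng.uniform(1.0, 5.0, size=500)
    rho, _ = spearmanr(predicted, held_out_true)
    print(f"Spearman rho on held-out norms: {rho:.2f}")
    ```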
  • Thompson, B., Roberts, S., & Lupyan, G. (2018). Quantifying semantic similarity across languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 2551-2556). Austin, TX: Cognitive Science Society.

    Abstract

    Do all languages convey semantic knowledge in the same way? If language simply mirrors the structure of the world, the answer should be a qualified “yes”. If, however, languages impose structure as much as they reflect it, then even ostensibly the “same” word in different languages may mean quite different things. We provide a first pass at a large-scale quantification of cross-linguistic semantic alignment of approximately 1000 meanings in 55 languages. We find that the translation equivalents in some domains (e.g., Time, Quantity, and Kinship) exhibit high alignment across languages, while the structure of other domains (e.g., Politics, Food, Emotions, and Animals) exhibits substantial cross-linguistic variability. Our measure of semantic alignment correlates with known phylogenetic distances between languages: more phylogenetically distant languages have less semantic alignment. We also find semantic alignment to correlate with cultural distances between societies speaking the languages, suggesting a rich co-adaptation of language and culture even in domains of experience that appear most constrained by the natural world.
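    One way to operationalize cross-linguistic semantic alignment for a set of translation equivalents, in the spirit of (though not identical to) the paper's measure, is a second-order comparison: compute pairwise similarities among the meanings within each language and correlate the two similarity structures. A hedged sketch with random placeholder vectors:

    ```python
    # Sketch of a second-order semantic-alignment measure for one domain:
    # correlate the pairwise similarity structures of translation equivalents
    # in two languages. Vectors are random placeholders; the paper's exact
    # measure and embeddings are not reproduced.
    import numpy as np
    from scipy.stats import spearmanr
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(3)
    n_meanings, dim = 40, 300

    # Hypothetical embeddings for the same 40 meanings in two languages,
    # with row i in both matrices being translation equivalents.
    lang_a = rng.normal(size=(n_meanings, dim))
    lang_b = rng.normal(size=(n_meanings, dim))

    # Condensed vectors of pairwise cosine similarities within each language.
    sim_a = 1.0 - pdist(lang_a, metric="cosine")
    sim_b = 1.0 - pdist(lang_b, metric="cosine")

    alignment, _ = spearmanr(sim_a, sim_b)
    print(f"semantic alignment (Spearman rho): {alignment:.2f}")
    ```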
  • Thompson, P. M., Stein, J. L., Medland, S. E., Hibar, D. P., Vasquez, A. A., Renteria, M. E., Toro, R., Jahanshad, N., Schumann, G., Franke, B., Wright, M. J., Martin, N. G., Agartz, I., Alda, M., Alhusaini, S., Almasy, L., Almeida, J., Alpert, K., Andreasen, N. C., Andreassen, O. A., and 269 more (2014). The ENIGMA Consortium: Large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging and Behavior, 8(2), 153-182. doi:10.1007/s11682-013-9269-5.

    Abstract

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA’s first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression, and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
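    The meta-analytic step described here typically relies on inverse-variance weighting of per-site effect estimates. The sketch below shows a generic fixed-effects version with invented site estimates; it is not ENIGMA's actual pipeline.

    ```python
    # Generic fixed-effects, inverse-variance meta-analysis across sites:
    # pooled beta = sum(w_i * beta_i) / sum(w_i), with w_i = 1 / SE_i^2.
    # Site estimates below are invented for illustration.
    import numpy as np
    from scipy.stats import norm

    betas = np.array([0.12, 0.08, 0.15, 0.05, 0.10])   # per-site effect sizes
    ses = np.array([0.05, 0.06, 0.07, 0.04, 0.05])     # per-site standard errors

    weights = 1.0 / ses**2
    pooled_beta = np.sum(weights * betas) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    z = pooled_beta / pooled_se
    p = 2.0 * norm.sf(abs(z))

    print(f"pooled beta = {pooled_beta:.3f} (SE {pooled_se:.3f}), "
          f"z = {z:.2f}, p = {p:.2g}")
    ```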
  • Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2015). 1- and 2-year-olds’ expectations about third-party communicative actions. Infant Behavior and Development, 39, 53-66. doi:10.1016/j.infbeh.2015.02.002.

    Abstract

    Infants expect people to direct actions toward objects, and they respond to actions directed to themselves, but do they have expectations about actions directed to third parties? In two experiments, we used eye tracking to investigate 1- and 2-year-olds’ expectations about communicative actions addressed to a third party. Experiment 1 presented infants with videos where an adult (the Emitter) either uttered a sentence or produced non-speech sounds. The Emitter was either face-to-face with another adult (the Recipient) or the two were back-to-back. The Recipient did not respond to any of the sounds. We found that 2-, but not 1-year-olds looked quicker and longer at the Recipient following speech than non-speech, suggesting that they expected her to respond to speech. These effects were specific to the face-to-face context. Experiment 2 presented 1-year-olds with similar face-to-face exchanges but modified to engage infants and minimize task demands. The infants looked quicker to the Recipient following speech than non-speech, suggesting that they expected a response to speech. The study suggests that by 1 year of age infants expect communicative actions to be directed at a third-party listener.
  • Thorgrimsson, G. (2014). Infants' understanding of communication as participants and observers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Thorgrimsson, G., Fawcett, C., & Liszkowski, U. (2014). Infants’ expectations about gestures and actions in third-party interactions. Frontiers in Psychology, 5: 321. doi:10.3389/fpsyg.2014.00321.

    Abstract

    We investigated 14-month-old infants’ expectations toward a third party addressee of communicative gestures and an instrumental action. Infants’ eye movements were tracked as they observed a person (the Gesturer) point, direct a palm-up request gesture, or reach toward an object, and another person (the Addressee) respond by grasping it. Infants’ looking patterns indicate that when the Gesturer pointed or used the palm-up request, infants anticipated that the Addressee would give the object to the Gesturer, suggesting that they ascribed a motive of request to the gestures. In contrast, when the Gesturer reached for the object, and in a control condition where no action took place, the infants did not anticipate the Addressee’s response. The results demonstrate that infants’ recognition of communicative gestures extends to others’ interactions, and that infants can anticipate how third-party addressees will respond to others’ gestures.
  • Thorin, J., Sadakata, M., Desain, P., & McQueen, J. M. (2018). Perception and production in interaction during non-native speech category learning. The Journal of the Acoustical Society of America, 144(1), 92-103. doi:10.1121/1.5044415.

    Abstract

    Establishing non-native phoneme categories can be a notoriously difficult endeavour—in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.
  • Tian, X., Ding, N., Teng, X., Bai, F., & Poeppel, D. (2018). Imagined speech influences perceived loudness of sound. Nature Human Behaviour, 2, 225-234. doi:10.1038/s41562-018-0305-8.

    Abstract

    The way top-down and bottom-up processes interact to shape our perception and behaviour is a fundamental question and remains highly controversial. How early in a processing stream do such interactions occur, and what factors govern such interactions? The degree of abstractness of a perceptual attribute (for example, orientation versus shape in vision, or loudness versus sound identity in hearing) may determine the locus of neural processing and interaction between bottom-up and internal information. Using an imagery-perception repetition paradigm, we find that imagined speech affects subsequent auditory perception, even for a low-level attribute such as loudness. This effect is observed in early auditory responses in magnetoencephalography and electroencephalography that correlate with behavioural loudness ratings. The results suggest that the internal reconstruction of neural representations without external stimulation is flexibly regulated by task demands, and that such top-down processes can interact with bottom-up information at an early perceptual stage to modulate perception.
  • Tilot, A. K., Frazier, T. W., & Eng, C. (2015). Balancing proliferation and connectivity in PTEN-associated Autism Spectrum Disorder. Neurotherapeutics, 13(3), 609-619. doi:10.1007/s13311-015-0356-8.

    Abstract

    PTEN, which encodes a widely expressed phosphatase, was mapped to 10q23 and identified as the gene in which germline mutations cause Cowden syndrome, characterized by macrocephaly and high risks of breast, thyroid, and other cancers. The phenotypic spectrum of PTEN mutations expanded to include autism with macrocephaly only 10 years ago. Neurological studies of patients with PTEN-associated autism spectrum disorder (ASD) show increases in cortical white matter and a distinctive cognitive profile, including delayed language development with poor working memory and processing speed. Once a germline PTEN mutation is found, and a diagnosis of phosphatase and tensin homolog (PTEN) hamartoma tumor syndrome made, the clinical outlook broadens to include higher lifetime risks for multiple cancers, beginning in childhood with thyroid cancer. First described as a tumor suppressor, PTEN is a major negative regulator of the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin (mTOR) signaling pathway, controlling growth, protein synthesis, and proliferation. This canonical function combines with less well-understood mechanisms to influence synaptic plasticity and neuronal cytoarchitecture. Several excellent mouse models of Pten loss or dysfunction link these neural functions to autism-like behavioral abnormalities, such as altered sociability, repetitive behaviors, and phenotypes like anxiety that are often associated with ASD in humans. These models also show the promise of mTOR inhibitors as therapeutic agents capable of reversing phenotypes ranging from overgrowth to low social behavior. Based on these findings, therapeutic options for patients with PTEN hamartoma tumor syndrome and ASD are coming into view, even as new discoveries in PTEN biology add complexity to our understanding of this master regulator.

    Additional information

    13311_2015_356_MOESM1_ESM.pdf
