Publications

  • Shayan, S., Ozturk, O., Bowerman, M., & Majid, A. (2014). Spatial metaphor in language can promote the development of cross-modal mappings in children. Developmental Science, 17(4), 636-643. doi:10.1111/desc.12157.

    Abstract

    Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a cross-modal association task. All groups, except for German children, performed significantly better than chance. German-speaking adults’ success suggests the pitch-to-thickness association can be learned by experience. But the fact that German children were at chance indicates that this learning takes time. Intriguingly, Farsi and Turkish children's performance suggests that learning cross-modal associations can be boosted through experience with consistent metaphorical mappings in the input language.
  • Shen, C., & Janse, E. (2019). Articulatory control in speech production. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2533-2537). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
  • Shen, C., Cooke, M., & Janse, E. (2019). Individual articulatory control in speech enrichment. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the 23rd International Congress on Acoustics (pp. 5726-5730). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Individual talkers may use various strategies to enrich their speech while speaking in noise (i.e., Lombard speech) to improve their intelligibility. The resulting acoustic-phonetic changes in Lombard speech vary amongst different speakers, but it is unclear what causes these talker differences, and what impact these differences have on intelligibility. This study investigates the potential role of articulatory control in talkers’ Lombard speech enrichment success. Seventy-eight speakers read out sentences in both their habitual style and in a condition where they were instructed to speak clearly while hearing loud speech-shaped noise. A diadochokinetic (DDK) speech task, which requires speakers to repetitively produce word or non-word sequences as accurately and as rapidly as possible, was used to quantify their articulatory control. Individuals’ predicted intelligibility in both speaking styles (presented at -5 dB SNR) was measured using an acoustic glimpse-based metric: the High-Energy Glimpse Proportion (HEGP). Speakers’ HEGP scores show a clear effect of speaking condition (better HEGP scores in the Lombard than habitual condition), but no simple effect of articulatory control on HEGP, nor an interaction between speaking condition and articulatory control. This indicates that individuals’ speech enrichment success as measured by the HEGP metric was not predicted by DDK performance.
  • Shkaravska, O., Van Eekelen, M., & Tamalet, A. (2014). Collected size semantics for strict functional programs over general polymorphic lists. In U. Dal Lago, & R. Pena (Eds.), Foundational and Practical Aspects of Resource Analysis: Third International Workshop, FOPARA 2013, Bertinoro, Italy, August 29-31, 2013, Revised Selected Papers (pp. 143-159). Berlin: Springer.

    Abstract

    Size analysis can be an important part of heap consumption analysis. This paper is a part of ongoing work about typing support for checking output-on-input size dependencies for function definitions in a strict functional language. A significant restriction for our earlier results is that inner data structures (e.g. in a list of lists) all must have the same size. Here, we make a big step forwards by overcoming this limitation via the introduction of higher-order size annotations such that variate sizes of inner data structures can be expressed. In this way the analysis becomes applicable for general, polymorphic nested lists.
  • Shkaravska, O., & Van Eekelen, M. (2014). Univariate polynomial solutions of algebraic difference equations. Journal of Symbolic Computation, 60, 15-28. doi:10.1016/j.jsc.2013.10.010.

    Abstract

    Contrary to linear difference equations, there is no general theory of difference equations of the form G(P(x−τ1),…,P(x−τs))+G0(x)=0, with τi∈K, G(x1,…,xs)∈K[x1,…,xs] of total degree D⩾2 and G0(x)∈K[x], where K is a field of characteristic zero. This article is concerned with the following problem: given τi, G and G0, find an upper bound on the degree d of a polynomial solution P(x), if it exists. In the presented approach the problem is reduced to constructing a univariate polynomial for which d is a root. The authors formulate a sufficient condition under which such a polynomial exists. Using this condition, they give an effective bound on d, for instance, for all difference equations of the form G(P(x−a),P(x−a−1),P(x−a−2))+G0(x)=0 with quadratic G, and all difference equations of the form G(P(x),P(x−τ))+G0(x)=0 with G having an arbitrary degree.
  • Sicoli, M. A., Stivers, T., Enfield, N. J., & Levinson, S. C. (2015). Marked initial pitch in questions signals marked communicative function. Language and Speech, 58(2), 204-223. doi:10.1177/0023830914529247.

    Abstract

    In conversation, the initial pitch of an utterance can provide an early phonetic cue of the communicative function, the speech act, or the social action being implemented. We conducted quantitative acoustic measurements and statistical analyses of pitch in over 10,000 utterances, including 2512 questions, their responses, and about 5000 other utterances by 180 total speakers from a corpus of 70 natural conversations in 10 languages. We measured pitch at first prominence in a speaker’s utterance and discriminated utterances by language, speaker, gender, question form, and what social action is achieved by the speaker’s turn. Through applying multivariate logistic regression we found that initial pitch that significantly deviated from the speaker’s median pitch level was predictive of the social action of the question. In questions designed to solicit agreement with an evaluation rather than information, pitch diverged predictably from a speaker’s median, falling in the top 10% of a speaker’s range. This latter finding reveals a kind of iconicity in the relationship between prosody and social action in which a marked pitch correlates with a marked social action. Thus, we argue that speakers rely on pitch to provide an early signal for recipients that the question is not to be interpreted through its literal semantics but rather through an inference.
  • Sidnell, J., & Enfield, N. J. (2014). Deixis and the interactional foundations of reference. In Y. Huang (Ed.), The Oxford handbook of pragmatics.
  • Sidnell, J., Kockelman, P., & Enfield, N. J. (2014). Community and social life. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 481-483). Cambridge: Cambridge University Press.
  • Sidnell, J., Enfield, N. J., & Kockelman, P. (2014). Interaction and intersubjectivity. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 343-345). Cambridge: Cambridge University Press.
  • Sidnell, J., & Enfield, N. J. (2014). The ontology of action, in interaction. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423-446). Cambridge: Cambridge University Press.
  • Silva, S., Branco, P., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). Musical phrase boundaries, wrap-up and the closure positive shift. Brain Research, 1585, 99-107. doi:10.1016/j.brainres.2014.08.025.

    Abstract

    We investigated global integration (wrap-up) processes at the boundaries of musical phrases by comparing the effects of well and non-well formed phrases on event-related potentials time-locked to two boundary points: the onset and the offset of the boundary pause. The Closure Positive Shift, which is elicited at the boundary offset, was not modulated by the quality of phrase structure (well vs. non-well formed). In contrast, the boundary onset potentials showed different patterns for well and non-well formed phrases. Our results contribute to specify the functional meaning of the Closure Positive Shift in music, shed light on the large-scale structural integration of musical input, and raise new hypotheses concerning shared resources between music and language.
  • Silva, S., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. Journal of Integrative Neuroscience, 13(1), 19-34. doi:10.1142/S0219635214500022.

    Abstract

    Neuroscientific and musicological approaches to music cognition indicate that listeners familiarized in the Western tonal tradition expect a musical phrase boundary at predictable time intervals. However, phrase boundary prediction processes in music remain untested. We analyzed event-related potentials (ERPs) and event-related induced power changes at the onset and offset of a boundary pause. We made comparisons with modified melodies, where the pause was omitted and filled by tones. The offset of the pause elicited a closure positive shift (CPS), indexing phrase boundary detection. The onset of the filling tones elicited significant increases in theta and beta powers. In addition, the P2 component was larger when the filling tones started than when they ended. The responses to boundary omission suggest that listeners expected to hear a boundary pause. Therefore, boundary prediction seems to coexist with boundary detection in music segmentation.
  • Simanova, I. (2014). In search of conceptual representations in the brain: Towards mind-reading. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
  • Simanova, I., Van Gerven, M. A., Oostenveld, R., & Hagoort, P. (2015). Predicting the semantic category of internally generated words from neuromagnetic recordings. Journal of Cognitive Neuroscience, 27(1), 35-45. doi:10.1162/jocn_a_00690.

    Abstract

    In this study, we explore the possibility to predict the semantic category of words from brain signals in a free word generation task. Participants produced single words from different semantic categories in a modified semantic fluency task. A Bayesian logistic regression classifier was trained to predict the semantic category of words from single-trial MEG data. Significant classification accuracies were achieved using sensor-level MEG time series at the time interval of conceptual preparation. Semantic category prediction was also possible using source-reconstructed time series, based on minimum norm estimates of cortical activity. Brain regions that contributed most to classification on the source level were identified. These were the left inferior frontal gyrus, left middle frontal gyrus, and left posterior middle temporal gyrus. Additionally, the temporal dynamics of brain activity underlying the semantic preparation during word generation was explored. These results provide important insights about central aspects of language production.
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, they also apply less strict criteria.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simpson, N. H., Ceroni, F., Reader, R. H., Covill, L. E., Knight, J. C., the SLI Consortium, Hennessy, E. R., Bolton, P. F., Conti-Ramsden, G., O’Hare, A., Baird, G., Fisher, S. E., & Newbury, D. F. (2015). Genome-wide analysis identifies a role for common copy number variants in specific language impairment. European Journal of Human Genetics, 23, 1370-1377. doi:10.1038/ejhg.2014.296.

    Abstract

    An exploratory genome-wide copy number variant (CNV) study was performed in 127 independent cases with specific language impairment (SLI), their first-degree relatives (385 individuals) and 269 population controls. Language-impaired cases showed an increased CNV burden in terms of the average number of events (11.28 vs 10.01, empirical P=0.003), the total length of CNVs (717 vs 513 Kb, empirical P=0.0001), the average CNV size (63.75 vs 51.6 Kb, empirical P=0.0005) and the number of genes spanned (14.29 vs 10.34, empirical P=0.0007) when compared with population controls, suggesting that CNVs may contribute to SLI risk. A similar trend was observed in first-degree relatives regardless of affection status. The increased burden found in our study was not driven by large or de novo events, which have been described as causative in other neurodevelopmental disorders. Nevertheless, de novo CNVs might be important on a case-by-case basis, as indicated by identification of events affecting relevant genes, such as ACTR2 and CSNK1A1, and small events within known micro-deletion/-duplication syndrome regions, such as chr8p23.1. Pathway analysis of the genes present within the CNVs of the independent cases identified significant overrepresentation of acetylcholine binding, cyclic-nucleotide phosphodiesterase activity and MHC proteins as compared with controls. Taken together, our data suggest that the majority of the risk conferred by CNVs in SLI is via common, inherited events within a ‘common disorder–common variant’ model. Therefore the risk conferred by CNVs will depend upon the combination of events inherited (both CNVs and SNPs), the genetic background of the individual and the environmental factors.

    Additional information

    ejhg2014296x1.pdf ejhg2014296x2.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders, but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified, providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found, giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%), suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Sjerps, M. J., & Reinisch, E. (2015). Divide and conquer: How perceptual contrast sensitivity and perceptual learning cooperate in reducing input variation in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 710-722. doi:10.1037/a0039028.

    Abstract

    Listeners have to overcome variability of the speech signal that can arise, for example, because of differences in room acoustics, differences in speakers’ vocal tract properties, or idiosyncrasies in pronunciation. Two mechanisms that are involved in resolving such variation are perceptually contrastive effects that arise from surrounding acoustic context and lexically guided perceptual learning. Although both processes have been studied in great detail, little attention has been paid to how they operate relative to each other in speech perception. The present study set out to address this issue. The carrier parts of exposure stimuli of a classical perceptual learning experiment were spectrally filtered such that the acoustically ambiguous final fricatives sounded relatively more like the lexically intended sound (Experiment 1) or the alternative (Experiment 2). Perceptual learning was found only in the latter case. The findings show that perceptual contrast effects precede lexically guided perceptual learning, at least in terms of temporal order, and potentially in terms of cognitive processing levels as well.
  • Sjerps, M. J., & Chang, E. F. (2019). The cortical processing of speech sounds in the temporal lobe. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 361-379). Cambridge, MA: MIT Press.
  • Sjerps, M. J., Fox, N. P., Johnson, K., & Chang, E. F. (2019). Speaker-normalized sound representations in the human auditory cortex. Nature Communications, 10: 2465. doi:10.1038/s41467-019-10365-z.

    Abstract

    The acoustic dimensions that distinguish speech sounds (like the vowel differences in “boot” and “boat”) also differentiate speakers’ voices. Therefore, listeners must normalize across speakers without losing linguistic information. Past behavioral work suggests an important role for auditory contrast enhancement in normalization: preceding context affects listeners’ perception of subsequent speech sounds. Here, using intracranial electrocorticography in humans, we investigate whether and how such context effects arise in auditory cortex. Participants identified speech sounds that were preceded by phrases from two different speakers whose voices differed along the same acoustic dimension as target words (the lowest resonance of the vocal tract). In every participant, target vowels evoke a speaker-dependent neural response that is consistent with the listener’s perception, and which follows from a contrast enhancement model. Auditory cortex processing thus displays a critical feature of normalization, allowing listeners to extract meaningful content from the voices of diverse speakers.

    Additional information

    41467_2019_10365_MOESM1_ESM.pdf
  • Sjerps, M. J., & Meyer, A. S. (2015). Variation in dual-task performance reveals late initiation of speech planning in turn-taking. Cognition, 136, 304-324. doi:10.1016/j.cognition.2014.10.008.

    Abstract

    The smooth transitions between turns in natural conversation suggest that speakers often begin to plan their utterances while listening to their interlocutor. The presented study investigates whether this is indeed the case and, if so, when utterance planning begins. Two hypotheses were contrasted: that speakers begin to plan their turn as soon as possible (in our experiments less than a second after the onset of the interlocutor’s turn), or that they do so close to the end of the interlocutor’s turn. Turn-taking was combined with a finger tapping task to measure variations in cognitive load. We assumed that the onset of speech planning in addition to listening would be accompanied by deterioration in tapping performance. Two picture description experiments were conducted. In both experiments there were three conditions: (1) Tapping and Speaking, where participants tapped a complex pattern while taking over turns from a pre-recorded speaker, (2) Tapping and Listening, where participants carried out the tapping task while overhearing two pre-recorded speakers, and (3) Speaking Only, where participants took over turns as in the Tapping and Speaking condition but without tapping. The experiments differed in the amount of tapping training the participants received at the beginning of the session. In Experiment 2, the participants’ eye-movements were recorded in addition to their speech and tapping. Analyses of the participants’ tapping performance and eye movements showed that they initiated the cognitively demanding aspects of speech planning only shortly before the end of the turn of the preceding speaker. We argue that this is a smart planning strategy, which may be the speakers’ default in many everyday situations.
  • Skiba, R. (2004). Revitalisierung bedrohter Sprachen - Ein Ernstfall für die Sprachdidaktik. In H. W. Hess (Ed.), Didaktische Reflexionen "Berliner Didaktik" und Deutsch als Fremdsprache heute (pp. 251-262). Berlin: Staufenburg.
  • Skiba, R. (1988). Computer analysis of language data using the data transformation program TEXTWOLF in conjunction with a database system. In U. Jung (Ed.), Computers in applied linguistics and language teaching (pp. 155-159). Frankfurt am Main: Peter Lang.
  • Skiba, R. (1988). Computerunterstützte Analyse von sprachlichen Daten mit Hilfe des Datenumwandlungsprogramms TextWolf in Kombination mit einem Datenbanksystem. In B. Spillner (Ed.), Angewandte Linguistik und Computer (pp. 86-88). Tübingen: Gunter Narr.
  • Skiba, R., Wittenburg, F., & Trilsbeek, P. (2004). New DoBeS web site: Contents & functions. Language Archive Newsletter, 1(2), 4-4.
  • Skiba, R. (1998). Fachsprachenforschung in wissenschaftstheoretischer Perspektive. Tübingen: Gunter Narr.
  • Sleegers, K., Bettens, K., De Roeck, A., Van Cauwenberghe, C., Cuyvers, E., Verheijen, J., Struyfs, H., Van Dongen, J., Vermeulen, S., Engelborghs, S., Vandenbulcke, M., Vandenberghe, R., De Deyn, P., Van Broeckhoven, C., & BELNEU consortium (2015). A 22-single nucleotide polymorphism Alzheimer's disease risk score correlates with family history, onset age, and cerebrospinal fluid Aβ42. Alzheimer's & Dementia, 11(12), 1452-1460. doi:10.1016/j.jalz.2015.02.013.

    Abstract

    Introduction: The ability to identify individuals at increased genetic risk for Alzheimer's disease (AD) may streamline biomarker and drug trials and aid clinical and personal decision making. Methods: We evaluated the discriminative ability of a genetic risk score (GRS) covering 22 published genetic risk loci for AD in 1162 Flanders-Belgian AD patients and 1019 controls and assessed correlations with family history, onset age, and cerebrospinal fluid (CSF) biomarkers (Aβ1-42, T-Tau, P-Tau181P). Results: A GRS including all single nucleotide polymorphisms (SNPs) and age-specific APOE ε4 weights reached area under the curve (AUC) 0.70, which increased to AUC 0.78 for patients with familial predisposition. Risk of AD increased with GRS (odds ratio 2.32, 95% confidence interval 2.08-2.58 per unit; P < 1.0e-15). Onset age and CSF Aβ1-42 decreased with increasing GRS (P = 9.0e-11 for onset age; P = 8.9e-7 for Aβ1-42). Discussion: The discriminative ability of this 22-SNP GRS is still limited, but these data illustrate that incorporation of age-specific weights improves discriminative ability. GRS-phenotype correlations highlight the feasibility of identifying individuals at highest susceptibility.
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within-group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Sloetjes, H. (2014). ELAN: Multimedia annotation application. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 305-320). Oxford: Oxford University Press.
  • Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: markers of communicative relevance of gesture during demonstration to adults and children. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 217-222). Nantes: Universite of Nantes.

    Abstract

    Speakers adapt their speech and gestures in various ways for their audience. We investigated further whether they use ostensive signals (eye gaze, ostensive speech (e.g., like this, this), or a combination of both) in relation to their gestures when talking to different addressees, i.e., to another adult or a child, in a multimodal demonstration task. While adults used more eye gaze towards their gestures with other adults than with children, they were more likely to use combined ostensive signals for children than for adults. Thus speakers mark the communicative relevance of their gestures with different types of ostensive signals and by taking different types of addressees into account.
  • Smalle, E., Szmalec, A., Bogaerts, L., Page, M. P. A., Narang, V., Misra, D., Araujo, S., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., & Huettig, F. (2019). Literacy improves short-term serial recall of spoken verbal but not visuospatial items - Evidence from illiterate and literate adults. Cognition, 185, 144-150. doi:10.1016/j.cognition.2019.01.012.

    Abstract

    It is widely accepted that specific memory processes, such as serial-order memory, are involved in written language development and predictive of reading and spelling abilities. The reverse question, namely whether orthographic abilities also affect serial-order memory, has hardly been investigated. In the current study, we compared 20 illiterate people with a group of 20 literate matched controls on a verbal and a visuospatial version of the Hebb paradigm, measuring both short- and long-term serial-order memory abilities. We observed better short-term serial-recall performance for the literate compared with the illiterate people. This effect was stronger in the verbal than in the visuospatial modality, suggesting that the improved capacity of the literate group is a consequence of learning orthographic skills. The long-term consolidation of ordered information was comparable across groups, for both stimulus modalities. The implications of these findings for current views regarding the bi-directional interactions between memory and written language development are discussed.

    Additional information

    Supplementary material Datasets
  • De Smedt, K., Hinrichs, E., Meurers, D., Skadiņa, I., Sanford Pedersen, B., Navarretta, C., Bel, N., Lindén, K., Lopatková, M., Hajič, J., Andersen, G., & Lenkiewicz, P. (2014). CLARA: A new generation of researchers in common language resources and their applications. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 2166-2174).
  • De Smedt, K., & Kempen, G. (1987). Incremental sentence production, self-correction, and coordination. In G. Kempen (Ed.), Natural language generation: New results in artificial intelligence, psychology and linguistics (pp. 365-376). Dordrecht: Nijhoff.
  • Smeets, C. J. L. M., Jezierska, J., Watanabe, H., Duarri, A., Fokkens, M. R., Meijer, M., Zhou, Q., Yakovleva, T., Boddeke, E., den Dunnen, W., van Deursen, J., Bakalkin, G., Kampinga, H. H., van de Sluis, B., & S. Verbeek, D. (2015). Elevated mutant dynorphin A causes Purkinje cell loss and motor dysfunction in spinocerebellar ataxia type 23. Brain, 138(9), 2537-2552. doi:10.1093/brain/awv195.

    Abstract

    Spinocerebellar ataxia type 23 is caused by mutations in PDYN, which encodes the opioid neuropeptide precursor protein, prodynorphin. Prodynorphin is processed into the opioid peptides, α-neoendorphin, and dynorphins A and B, that normally exhibit opioid-receptor mediated actions in pain signalling and addiction. Dynorphin A is likely a mutational hotspot for spinocerebellar ataxia type 23 mutations, and in vitro data suggested that dynorphin A mutations lead to persistently elevated mutant peptide levels that are cytotoxic and may thus play a crucial role in the pathogenesis of spinocerebellar ataxia type 23. To further test this and study spinocerebellar ataxia type 23 in more detail, we generated a mouse carrying the spinocerebellar ataxia type 23 mutation R212W in PDYN. Analysis of peptide levels using a radioimmunoassay shows that these PDYNR212W mice display markedly elevated levels of mutant dynorphin A, which are associated with climber fibre retraction and Purkinje cell loss, visualized with immunohistochemical stainings. The PDYNR212W mice reproduced many of the clinical features of spinocerebellar ataxia type 23, with gait deficits starting at 3 months of age revealed by footprint pattern analysis, and progressive loss of motor coordination and balance at the age of 12 months demonstrated by declining performances on the accelerating Rotarod. The pathologically elevated mutant dynorphin A levels in the cerebellum coincided with transcriptionally dysregulated ionotropic and metabotropic glutamate receptors and glutamate transporters, and altered neuronal excitability. In conclusion, the PDYNR212W mouse is the first animal model of spinocerebellar ataxia type 23 and our work indicates that the elevated mutant dynorphin A peptide levels are likely responsible for the initiation and progression of the disease, affecting glutamatergic signalling, neuronal excitability, and motor performance. Our novel mouse model defines a critical role for opioid neuropeptides in spinocerebellar ataxia, and suggests that restoring the elevated mutant neuropeptide levels can be explored as a therapeutic intervention.
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes to the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13). (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smith, A. C. (2015). Modelling multimodal language processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analysis of a data set from the literature shows that interesting dependencies in human speech recognition can be uncovered using the model.
  • Smits, A., Seijdel, N., Scholte, H., Heywood, C., Kentridge, R., & de Haan, E. (2019). Action blindsight and antipointing in a hemianopic patient. Neuropsychologia, 128, 270-275. doi:10.1016/j.neuropsychologia.2018.03.029.

    Abstract

    Blindsight refers to the observation of residual visual abilities in the hemianopic field of patients without a functional V1. Given the within- and between-subject variability in the preserved abilities and the phenomenal experience of blindsight patients, the fine-grained description of the phenomenon is still debated. Here we tested a patient with established “perceptual” and “attentional” blindsight (cf. Danckert and Rossetti, 2005). Using a pointing paradigm patient MS, who suffers from a complete left homonymous hemianopia, showed clear above chance manual localisation of ‘unseen’ targets. In addition, target presentations in his blind field led MS, on occasion, to spontaneous responses towards his sighted field. Structural and functional magnetic resonance imaging was conducted to evaluate the magnitude of V1 damage. Results revealed the presence of a calcarine sulcus in both hemispheres, yet his right V1 is reduced, structurally disconnected and shows no fMRI response to visual stimuli. Thus, visual stimulation of his blind field can lead to “action blindsight” and spontaneous antipointing, in the absence of a functional right V1. With respect to the antipointing, we suggest that MS may have registered the stimulation and subsequently presumed it must have been in his intact half field.

    Additional information

    video
  • Smorenburg, L., Rodd, J., & Chen, A. (2015). The effect of explicit training on the prosodic production of L2 sarcasm by Dutch learners of English. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK: University of Glasgow.

    Abstract

    Previous research [9] suggests that Dutch learners of (British) English are not able to express sarcasm prosodically in their L2. The present study investigates whether explicit training on the prosodic markers of sarcasm in English can improve learners’ realisation of sarcasm. Sarcastic speech was elicited in short simulated telephone conversations between Dutch advanced learners of English and a native British English-speaking ‘friend’ in two sessions, fourteen days apart. Between the two sessions, participants were trained by means of (1) a presentation, (2) directed independent practice, and (3) evaluation of participants’ production and individual feedback in small groups. L1 British English-speaking raters subsequently rated how sarcastic the participants’ responses sounded on a five-point scale. Significantly higher sarcasm ratings were given to L2 learners’ production obtained after the training than to that obtained before the training; explicit training on prosody has a positive effect on learners’ production of sarcasm.
  • Snijders Blok, L., Kleefstra, T., Venselaar, H., Maas, S., Kroes, H. Y., Lachmeijer, A. M. A., Van Gassen, K. L. I., Firth, H. V., Tomkins, S., Bodek, S., The DDD Study, Õunap, K., Wojcik, M. H., Cunniff, C., Bergstrom, K., Powis, Z., Tang, S., Shinde, D. N., Au, C., Iglesias, A. D., Izumi, K., Leonard, J., Tayoun, A. A., Baker, S. W., Tartaglia, M., Niceta, M., Dentici, M. L., Okamoto, N., Miyake, N., Matsumoto, N., Vitobello, A., Faivre, L., Philippe, C., Gilissen, C., Wiel, L., Pfundt, R., Derizioti, P., Brunner, H. G., & Fisher, S. E. (2019). De novo variants disturbing the transactivation capacity of POU3F3 cause a characteristic neurodevelopmental disorder. The American Journal of Human Genetics, 105(2), 403-412. doi:10.1016/j.ajhg.2019.06.007.

    Abstract

    POU3F3, also referred to as Brain-1, is a well-known transcription factor involved in the development of the central nervous system, but it has not previously been associated with a neurodevelopmental disorder. Here, we report the identification of 19 individuals with heterozygous POU3F3 disruptions, most of which are de novo variants. All individuals had developmental delays and/or intellectual disability and impairments in speech and language skills. Thirteen individuals had characteristic low-set, prominent, and/or cupped ears. Brain abnormalities were observed in seven of eleven MRI reports. POU3F3 is an intronless gene, insensitive to nonsense-mediated decay, and 13 individuals carried protein-truncating variants. All truncating variants that we tested in cellular models led to aberrant subcellular localization of the encoded protein. Luciferase assays demonstrated negative effects of these alleles on transcriptional activation of a reporter with a FOXP2-derived binding motif. In addition to the loss-of-function variants, five individuals had missense variants that clustered at specific positions within the functional domains, and one small in-frame deletion was identified. Two missense variants showed reduced transactivation capacity in our assays, whereas one variant displayed gain-of-function effects, suggesting a distinct pathophysiological mechanism. In bioluminescence resonance energy transfer (BRET) interaction assays, all the truncated POU3F3 versions that we tested had significantly impaired dimerization capacities, whereas all missense variants showed unaffected dimerization with wild-type POU3F3. Taken together, our identification and functional cell-based analyses of pathogenic variants in POU3F3, coupled with a clinical characterization, implicate disruptions of this gene in a characteristic neurodevelopmental disorder.
  • Soares, S. M. P., Ong, G., Abutalebi, J., Del Maschio, N., Sewell, D., & Weekes, B. (2019). A diffusion model approach to analyzing performance on the flanker task: the role of the DLPFC. Bilingualism: Language and Cognition, 22(5), 1194-1208. doi:10.1017/S1366728918000974.

    Abstract

    The anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC) are involved in conflict detection and conflict resolution, respectively. Here, we investigate how lifelong bilingualism induces neuroplasticity to these structures by employing a novel analysis of behavioural performance. We correlated grey matter volume (GMV) in seniors reported by Abutalebi et al. (2015) with behavioral Flanker task performance fitted using the diffusion model (Ratcliff, 1978). As predicted, we observed significant correlations between GMV in the DLPFC and Flanker performance. However, for monolinguals the non-decision time parameter was significantly correlated with GMV in the left DLPFC, whereas for bilinguals the correlation was significant in the right DLPFC. We also found a significant correlation between age and GMV in left DLPFC and the non-decision time parameter for the conflict effect for monolinguals only. We submit that this is due to cumulative demands on cognitive control over a lifetime of bilingual language processing.
  • Solberg Økland, H., Todorović, A., Lüttke, C. S., McQueen, J. M., & De Lange, F. P. (2019). Combined predictive effects of sentential and visual constraints in early audiovisual speech processing. Scientific Reports, 9: 7870. doi:10.1038/s41598-019-44311-2.

    Abstract

    In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual constraints have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.
  • Sollis, E. (2019). A network of interacting proteins disrupted in language-related disorders. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sonnweber, R., Ravignani, A., & Fitch, W. T. (2015). Non-adjacent visual dependency learning in chimpanzees. Animal Cognition, 18(3), 733-745. doi:10.1007/s10071-015-0840-x.

    Abstract

    Humans have a strong proclivity for structuring and patterning stimuli: Whether in space or time, we tend to mentally order stimuli in our environment and organize them into units with specific types of relationships. A crucial prerequisite for such organization is the cognitive ability to discern and process regularities among multiple stimuli. To investigate the evolutionary roots of this cognitive capacity, we tested chimpanzees—which, along with bonobos, are our closest living relatives—for simple, variable distance dependency processing in visual patterns. We trained chimpanzees to identify pairs of shapes either linked by an arbitrary learned association (arbitrary associative dependency) or a shared feature (same shape, feature-based dependency), and to recognize strings where items related in either of these ways occupied the first (leftmost) and the last (rightmost) item of the stimulus. We then probed the degree to which subjects generalized this pattern to new colors, shapes, and numbers of interspersed items. We found that chimpanzees can learn and generalize both types of dependency rules, indicating that the ability to encode both feature-based and arbitrary associative regularities over variable distances in the visual domain is not a human prerogative. Our results strongly suggest that these core components of human structural processing were already present in our last common ancestor with chimpanzees.

    Additional information

    supplementary material
  • Sonnweber, R. S., Ravignani, A., Stobbe, N., Schiestl, G., Wallner, B., & Fitch, W. T. (2015). Rank‐dependent grooming patterns and cortisol alleviation in Barbary macaques. American Journal of Primatology, 77(6), 688-700. doi:10.1002/ajp.22391.

    Abstract

    Flexibly adapting social behavior to social and environmental challenges helps to alleviate glucocorticoid (GC) levels, which may have positive fitness implications for an individual. For primates, the predominant social behavior is grooming. Giving grooming to others is particularly efficient in terms of GC mitigation. However, grooming is confined by certain limitations such as time constraints or restricted access to other group members. For instance, dominance hierarchies may impact grooming partner availability in primate societies. Consequently specific grooming patterns emerge. In despotic species focusing grooming activity on preferred social partners significantly ameliorates GC levels in females of all ranks. In this study we investigated grooming patterns and GC management in Barbary macaques, a comparably relaxed species. We monitored changes in grooming behavior and cortisol (C) for females of different ranks. Our results show that the C‐amelioration associated with different grooming patterns had a gradual connection with dominance hierarchy: while higher‐ranking individuals showed lowest urinary C measures when they focused their grooming on selected partners within their social network, lower‐ranking individuals expressed lowest C levels when dispersing their grooming activity evenly across their social partners. We argue that the relatively relaxed social style of Barbary macaque societies allows individuals to flexibly adapt grooming patterns, which is associated with rank‐specific GC management.
  • Sotaro, K., & Dickey, L. W. (Eds.). (1998). Max Planck Institute for Psycholinguistics: Annual report 1998. Nijmegen: Max Planck Institute for Psycholinguistics.
  • De Sousa, H., Langella, F., & Enfield, N. J. (2015). Temperature terms in Lao, Southern Zhuang, Southern Pinghua and Cantonese. In M. Koptjevskaja-Tamm (Ed.), The linguistics of temperature (pp. 594-638). Amsterdam: Benjamins.
  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Spaeth, J. M., Hunter, C. S., Bonatakis, L., Guo, M., French, C. A., Slack, I., Hara, M., Fisher, S. E., Ferrer, J., Morrisey, E. E., Stanger, B. Z., & Stein, R. (2015). The FOXP1, FOXP2 and FOXP4 transcription factors are required for islet alpha cell proliferation and function in mice. Diabetologia, 58, 1836-1844. doi:10.1007/s00125-015-3635-3.

    Abstract

    Aims/hypothesis: Several forkhead box (FOX) transcription factor family members have important roles in controlling pancreatic cell fates and maintaining beta cell mass and function, including FOXA1, FOXA2 and FOXM1. In this study we have examined the importance of FOXP1, FOXP2 and FOXP4 of the FOXP subfamily in islet cell development and function. Methods: Mice harbouring floxed alleles for Foxp1, Foxp2 and Foxp4 were crossed with pan-endocrine Pax6-Cre transgenic mice to generate single and compound Foxp mutant mice. Mice were monitored for changes in glucose tolerance by IPGTT, serum insulin and glucagon levels by radioimmunoassay, and endocrine cell development and proliferation by immunohistochemistry. Gene expression and glucose-stimulated hormone secretion experiments were performed with isolated islets. Results: Only the triple-compound Foxp1/2/4 conditional knockout (cKO) mutant had an overt islet phenotype, manifested physiologically by hypoglycaemia and hypoglucagonaemia. This resulted from the reduction in glucagon-secreting alpha cell mass and function. The proliferation of alpha cells was profoundly reduced in Foxp1/2/4 cKO islets through the effects on mediators of replication (i.e. decreased Ccna2, Ccnb1 and Ccnd2 activators, and increased Cdkn1a inhibitor). Adult islet Foxp1/2/4 cKO beta cells secrete insulin normally while the remaining alpha cells have impaired glucagon secretion. Conclusions/interpretation: Collectively, these findings reveal an important role for the FOXP1, 2, and 4 proteins in governing postnatal alpha cell expansion and function.
  • Spapé, M., Verdonschot, R. G., & Van Steenbergen, H. (2019). The E-Primer: An introduction to creating psychological experiments in E-Prime® (2nd ed. updated for E-Prime 3). Leiden: Leiden University Press.

    Abstract

    E-Prime® is the leading software suite by Psychology Software Tools for designing and running Psychology lab experiments. The E-Primer is the perfect accompanying guide: It provides all the necessary knowledge to make E-Prime accessible to everyone. You can learn the tools of Psychological science by following the E-Primer through a series of entertaining, step-by-step recipes that recreate classic experiments. The updated E-Primer expands its proven combination of simple explanations, interesting tutorials and fun exercises, and makes even the novice student quickly confident to create their dream experiment.
  • Spapé, M., Verdonschot, R. G., Van Dantzig, S., & Van Steenbergen, H. (2014). The E-Primer: An introduction to creating psychological experiments in E-Prime®. Leiden: Leiden University Press.

    Abstract

    E-Prime, the software suite by Psychology Software Tools, is used for designing, developing and running custom psychological experiments. Aimed at students and researchers alike, this book provides a much needed, down-to-earth introduction into a wide range of experiments that can be set up using E-Prime. Many tutorials are provided to teach the reader how to develop experiments typical for the broad fields of psychological and cognitive science. Apart from explaining the basic structure of E-Prime and describing how it fits into daily scientific practice, this book also offers an introduction into programming using E-Prime’s own E-Basic language. The authors guide the readers step-by-step through the software, from an elementary to an advanced level, enabling them to benefit from the enormous possibilities for experimental design offered by E-Prime.
  • Speed, L., & Majid, A. (2019). Linguistic features of fragrances: The role of grammatical gender and gender associations. Attention, Perception & Psychophysics, 81(6), 2063-2077. doi:10.3758/s13414-019-01729-0.

    Abstract

    Odors are often difficult to identify and name, which leaves them vulnerable to the influence of language. The present study tests the boundaries of the effect of language on odor cognition by examining the effect of grammatical gender. We presented participants with male and female fragrances paired with descriptions of masculine or feminine grammatical gender. In Experiment 1 we found that memory for fragrances was enhanced when the grammatical gender of a fragrance description matched the gender of the fragrance. In Experiment 2 we found memory for fragrances was affected by both grammatical gender and gender associations in fragrance descriptions – recognition memory for odors was higher when the gender was incongruent. In sum, we demonstrated that even subtle aspects of language can affect odor cognition.

    Additional information

    Supplementary material
  • Speed, L. J., O'Meara, C., San Roque, L., & Majid, A. (Eds.). (2019). Perception Metaphors. Amsterdam: Benjamins.

    Abstract

Metaphor allows us to think and talk about one thing in terms of another, ratcheting up our cognitive and expressive capacity. It gives us concrete terms for abstract phenomena, for example, ideas become things we can grasp or let go of. Perceptual experience—characterised as physical and relatively concrete—should be an ideal source domain in metaphor, and a less likely target. But is this the case across diverse languages? And are some sensory modalities perhaps more concrete than others? This volume presents critical new data on perception metaphors from over 40 languages, including many which are under-studied. Aside from the wealth of data from diverse languages—modern and historical; spoken and signed—a variety of methods (e.g., natural language corpora, experimental) and theoretical approaches are brought together. This collection highlights how perception metaphor can offer both a bedrock of common experience and a source of continuing innovation in human communication.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m^2" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of that phenotype which appropriately accounts for covariation in height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC) was performed. RESULTS: GWAS based on BMI[x] yielded marked differences in the genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at the genome-wide significance level (rs11676272, 0.28 kg/m^3.1 change per allele G (0.19, 0.38), P = 6 × 10^-9). In contrast, they showed marginal evidence of association with conventional BMI (rs11676272, 0.25 kg/m^2 (0.15, 0.35), P = 6 × 10^-7). Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences to that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant and it was strongly correlated with expression of this gene. Our work highlights the importance of well understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits.

    Additional information

    oby20840-sup-0001-suppinfo.docx
  • Stergiakouli, E., Martin, J., Hamshere, M. L., Langley, K., Evans, D. M., St Pourcain, B., Timpson, N. J., Owen, M. J., O'Donovan, M., Thapar, A., & Davey Smith, G. (2015). Shared Genetic Influences Between Attention-Deficit/Hyperactivity Disorder (ADHD) Traits in Children and Clinical ADHD. Journal of the American Academy of Child and Adolescent Psychiatry, 54(4), 322-327. doi:10.1016/j.jaac.2015.01.010.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

  • Stivers, T. (2004). Potilaan vastarinta: Keino vaikuttaa lääkärin hoitopäätökseen. Sosiaalilääketieteellinen Aikakauslehti, 41, 199-213.
  • Stivers, T. (2004). "No no no" and other types of multiple sayings in social interaction. Human Communication Research, 30(2), 260-293. doi:10.1111/j.1468-2958.2004.tb00733.x.

    Abstract

Relying on the methodology of conversation analysis, this article examines a practice in ordinary conversation characterized by the resaying of a word, phrase, or sentence. The article shows that multiple sayings such as "No no no" or "Alright alright alright" are systematic in both their positioning relative to the interlocutor's talk and in their function. Specifically, the findings are that multiple sayings are a resource speakers have to display that their turn is addressing an in-progress course of action rather than only the just prior utterance. Speakers of multiple sayings communicate their stance that the prior speaker has persisted unnecessarily in the prior course of action and should properly halt that course of action.
  • Stivers, T. (2004). Question sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 45-47). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506967.

    Abstract

When people request information, they have a variety of means for eliciting the information. In English, two of the primary resources for eliciting information include asking questions and making statements about one's interlocutor (thereby generating confirmation or revision). But within these types there are a variety of ways that these information elicitors can be designed. The goal of this task is to examine how different languages seek and provide information, the extent to which syntactic vs. prosodic resources are used (e.g., in questions), and the extent to which the design of information-seeking actions and their responses displays a structural preference to promote social solidarity.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2019). Bilingual preschoolers’ speech is associated with non-native maternal language input. Language Learning and Development, 15(1), 75-100. doi:10.1080/15475441.2018.1533473.

    Abstract

    Bilingual children are often exposed to non-native speech through their parents. Yet, little is known about the relation between bilingual preschoolers’ speech production and their speech input. The present study investigated the production of voice onset time (VOT) by Dutch-German bilingual preschoolers and their sequential bilingual mothers. The findings reveal an association between maternal VOT and bilingual children’s VOT in the heritage language German as well as in the majority language Dutch. By contrast, no input-production association was observed in the VOT production of monolingual German-speaking children and monolingual Dutch-speaking children. The results of this study provide the first empirical evidence that non-native and attrited maternal speech contributes to the often-observed linguistic differences between bilingual children and their monolingual peers.
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Sumer, B. (2015). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, Tx: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N399 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • De Swart, P., & Van Bergen, G. (2019). How animacy and verbal information influence V2 sentence processing: Evidence from eye movements. Open Linguistics, 5(1), 630-649. doi:10.1515/opli-2019-0035.

    Abstract

There exists a clear association between animacy and the grammatical function of transitive subject. The grammar of some languages requires the transitive subject to be high in animacy, or at least higher than the object. A similar animacy preference has been observed in processing studies in languages without such a categorical animacy effect. This animacy preference has been mainly established in structures in which either one or both arguments are provided before the verb. Our goal was to establish (i) whether this preference can already be observed before any argument is provided, and (ii) whether this preference is mediated by verbal information. To this end we exploited the V2 property of Dutch, which allows the verb to precede its arguments. Using a visual-world eye-tracking paradigm we presented participants with V2 structures with either an auxiliary (e.g. Gisteren heeft X … ‘Yesterday, X has …’) or a lexical main verb (e.g. Gisteren motiveerde X … ‘Yesterday, X motivated …’) and we measured looks to the animate referent. The results indicate that the animacy preference can already be observed before arguments are presented and that the selectional restrictions of the verb mediate this bias, but do not override it completely.
  • De Swart, P., & Van Bergen, G. (2014). Unscrambling the lexical nature of weak definites. In A. Aguilar-Guevara, B. Le Bruyn, & J. Zwarts (Eds.), Weak referentiality (pp. 287-310). Amsterdam: Benjamins.

    Abstract

We investigate how the lexical nature of weak definites influences the phenomenon of direct object scrambling in Dutch. Earlier experiments have indicated that weak definites are more resistant to scrambling than strong definites. We examine how the notion of weak definiteness used in this experimental work can be reduced to lexical connectedness. We explore four different ways of quantifying the relation between a direct object and the verb. Our results show that predictability of a verb given the object (verb cloze probability) provides the best fit to the weak/strong distinction used in the earlier experiments.
  • Sweegers, C. C. G., Takashima, A., Fernández, G., & Talamini, L. M. (2015). Neural mechanisms supporting the extraction of general knowledge across episodic memories. NeuroImage, 87, 138-146. doi:10.1016/j.neuroimage.2013.10.063.

    Abstract

    General knowledge acquisition entails the extraction of statistical regularities from the environment. At high levels of complexity, this may involve the extraction, and consolidation, of associative regularities across event memories. The underlying neural mechanisms would likely involve a hippocampo-neocortical dialog, as proposed previously for system-level consolidation. To test these hypotheses, we assessed possible differences in consolidation between associative memories containing cross-episodic regularities and unique associative memories. Subjects learned face–location associations, half of which responded to complex regularities regarding the combination of facial features and locations, whereas the other half did not. Importantly, regularities could only be extracted over hippocampus-encoded, associative aspects of the items. Memory was assessed both immediately after encoding and 48 h later, under fMRI acquisition. Our results suggest that processes related to system-level reorganization occur preferentially for regular associations across episodes. Moreover, the build-up of general knowledge regarding regular associations appears to involve the coordinated activity of the hippocampus and mediofrontal regions. The putative cross-talk between these two regions might support a mechanism for regularity extraction. These findings suggest that the consolidation of cross-episodic regularities may be a key mechanism underlying general knowledge acquisition.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.

    Abstract

    To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we trained Dutch participants repeatedly over the course of a month of training to articulate a set of novel disyllabic input strings written in Greek script to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole word unit) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that reading recombined syllables of the trained pseudowords showed similar brain activation to trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
  • Takashima, A., Bakker-Marshall, I., Van Hell, J. G., McQueen, J. M., & Janzen, G. (2019). Neural correlates of word learning in children. Developmental Cognitive Neuroscience, 37: 100649. doi:10.1016/j.dcn.2019.100649.

    Abstract

    Memory representations of words are thought to undergo changes with consolidation: Episodic memories of novel words are transformed into lexical representations that interact with other words in the mental dictionary. Behavioral studies have shown that this lexical integration process is enhanced when there is more time for consolidation. Neuroimaging studies have further revealed that novel word representations are initially represented in a hippocampally-centered system, whereas left posterior middle temporal cortex activation increases with lexicalization. In this study, we measured behavioral and brain responses to newly-learned words in children. Two groups of Dutch children, aged between 8-10 and 14-16 years, were trained on 30 novel Japanese words depicting novel concepts. Children were tested on word-forms, word-meanings, and the novel words’ influence on existing word processing immediately after training, and again after a week. In line with the adult findings, hippocampal involvement decreased with time. Lexical integration, however, was not observed immediately or after a week, neither behaviorally nor neurally. It appears that time alone is not always sufficient for lexical integration to occur. We suggest that other factors (e.g., the novelty of the concepts and familiarity with the language the words are derived from) might also influence the integration process.

    Additional information

    Supplementary data
  • Takashima, A., & Verhoeven, L. (2019). Radical repetition effects in beginning learners of Chinese as a foreign language reading. Journal of Neurolinguistics, 50, 71-81. doi:10.1016/j.jneuroling.2018.03.001.

    Abstract

The aim of the present study was to examine whether repetition of radicals during training of Chinese characters leads to better word acquisition performance in beginning learners of Chinese as a foreign language. Thirty Dutch university students were trained on 36 Chinese one-character words for their pronunciations and meanings. They were also exposed to the specifics of the radicals, that is, for phonetic radicals, the associated pronunciation was explained, and for semantic radicals the associated categorical meanings were explained. Results showed that repeated exposure to phonetic and semantic radicals through character pronunciation and meaning trainings indeed induced better understanding of those radicals that were shared among different characters. Furthermore, characters in the training set that shared phonetic radicals were pronounced better than those that did not. Repetition of semantic radicals across different characters, however, hindered the learning of exact meanings. Students generally confused the meanings of other characters that shared the semantic radical. The study shows that in the initial stage of learning, overlapping information from the shared radicals is effectively learned. Acquisition of the specifics of individual characters, however, requires more training.

    Additional information

    Supplementary data
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization. NeuroImage, 84, 265-278. doi:10.1016/j.neuroimage.2013.08.023.

    Abstract

The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words’ phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words’ phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24-hour delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding.
  • Tamaoka, K., Saito, N., Kiyama, S., Timmer, K., & Verdonschot, R. G. (2014). Is pitch accent necessary for comprehension by native Japanese speakers? - An ERP investigation. Journal of Neurolinguistics, 27(1), 31-40. doi:10.1016/j.jneuroling.2013.08.001.

    Abstract

Not unlike the tonal system in Chinese, Japanese habitually attaches pitch accents to the production of words. However, in contrast to Chinese, few homophonic word-pairs are really distinguished by pitch accents (Shibata & Shibata, 1990). This predicts that pitch accent plays a small role in lexical selection for Japanese language comprehension. The present study investigated whether native Japanese speakers necessarily use pitch accent in the processing of accent-contrasted homophonic pairs (e.g., ame [LH] for 'candy' and ame [HL] for 'rain'), measuring electroencephalographic (EEG) potentials. Electrophysiological evidence (i.e., N400) was obtained when a word was semantically incorrect for a given context but not for incorrectly accented homophones. This suggests that pitch accent indeed plays a minor role when understanding Japanese.
  • Tanner, D., Nicol, J., & Brehm, L. (2014). The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76, 195-215. doi:10.1016/j.jml.2014.07.003.

    Abstract

Attraction interference in language comprehension and production may result from common or different processes. In the present paper, we investigate attraction interference during language comprehension, focusing on the contexts in which interference arises and the time-course of these effects. Using evidence from event-related brain potentials (ERPs) and sentence judgment times, we show that agreement attraction in comprehension is best explained as morphosyntactic interference during memory retrieval. This stands in contrast to attraction as a message-level process involving the representation of the subject NP's number features, which is a strong contributor to attraction in production. We thus argue that the cognitive antecedents of agreement attraction in comprehension are non-identical with those of attraction in production, and moreover, that attraction in comprehension is primarily a consequence of similarity-based interference in cue-based memory retrieval processes. We suggest that mechanisms responsible for attraction during language comprehension are a subset of those involved in language production.
  • Tarenskeen, S., Broersma, M., & Geurts, B. (2015). Overspecification of color, pattern, and size: Salience, absoluteness, and consistency. Frontiers in Psychology, 6: 1703. doi:10.3389/fpsyg.2015.01703.

    Abstract

    The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Using a between-participants design, however, we find similar rates of pattern and size overspecification, which are both lower than the rate of color overspecification. This indicates that although many speakers are more likely to include color than pattern (probably because color is more salient), they may also treat pattern like color due to a tendency toward consistency. We find no increase in size overspecification when the salience of size is increased, suggesting that speakers are more likely to include absolute than relative attributes. However, we do find an increase in size overspecification when mentioning the attributes is triggered, which again shows that speakers tend to refer in a consistent manner, and that there are circumstances in which even size overspecification is frequently produced.
  • Tekcan, A. I., Yilmaz, E., Kaya Kızılö, B., Karadöller, D. Z., Mutafoğlu, M., & Erciyes, A. (2015). Retrieval and phenomenology of autobiographical memories in blind individuals. Memory, 23(3), 329-339. doi:10.1080/09658211.2014.886702.

    Abstract

    Although visual imagery is argued to be an essential component of autobiographical memory, there have been surprisingly few studies on autobiographical memory processes in blind individuals, who have had no or limited visual input. The purpose of the present study was to investigate how blindness affects retrieval and phenomenology of autobiographical memories. We asked 48 congenital/early blind and 48 sighted participants to recall autobiographical memories in response to six cue words, and to fill out the Autobiographical Memory Questionnaire measuring a number of variables including imagery, belief and recollective experience associated with each memory. Blind participants retrieved fewer memories and reported higher auditory imagery at retrieval than sighted participants. Moreover, within the blind group, participants with total blindness reported higher auditory imagery than those with some light perception. Blind participants also assigned higher importance, belief and recollection ratings to their memories than sighted participants. Importantly, these group differences remained the same for recent as well as childhood memories.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Turn-taking in social talk dialogues: Temporal, formal and functional aspects. In 9th International Conference Speech and Computer (SPECOM'2004) (pp. 454-461).

    Abstract

    This paper presents a quantitative analysis of the turn-taking mechanism evidenced in 93 telephone dialogues that were taken from the 9-million-word Spoken Dutch Corpus. While the first part of the paper focuses on the temporal phenomena of turn-taking, such as durations of pauses and overlaps of turns in the dialogues, the second part explores the discourse-functional aspects of utterances in a subset of 8 dialogues that were annotated especially for this purpose. The results show that speakers adapt their turn-taking behaviour to the interlocutor's behaviour. Furthermore, the results indicate that male-male dialogues show a higher proportion of overlapping turns than female-female dialogues.
  • Ten Bosch, L., Mulder, K., & Boves, L. (2019). Phase synchronization between EEG signals as a function of differences between stimuli characteristics. In Proceedings of Interspeech 2019 (pp. 1213-1217). doi:10.21437/Interspeech.2019-2443.

    Abstract

    The neural processing of speech leads to specific patterns in the brain which can be measured as, e.g., EEG signals. When properly aligned with the speech input and averaged over many tokens, the Event Related Potential (ERP) signal is able to differentiate specific contrasts between speech signals. Well-known effects relate to the difference between expected and unexpected words, in particular in the N400, while effects in the N100 and P200 are related to attention and acoustic onset effects. Most EEG studies deal with the amplitude of EEG signals over time, sidestepping the effect of phase and phase synchronization. This paper investigates the relation between the phases of EEG signals measured in an auditory lexical decision task by Dutch participants listening to full and reduced English word forms. We show that phase synchronization takes place across stimulus conditions, and that the so-called circular variance is closely related to the type of contrast between stimuli.
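    The circular variance mentioned in the abstract is a standard measure from circular statistics, not specific to this paper; a minimal sketch of how it is computed from a set of phase angles:

    ```python
    import numpy as np

    def circular_variance(phases):
        """Circular variance of phase angles (in radians):
        1 - |mean resultant vector|. Values near 0 indicate tightly
        synchronized phases; values near 1 indicate uniform dispersion."""
        resultant = np.mean(np.exp(1j * np.asarray(phases, dtype=float)))
        return 1.0 - np.abs(resultant)
    ```

    Identical phases give a circular variance of 0, while phases spread evenly around the circle give a value of 1.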
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2014). Comparing reaction time sequences from human participants and computational models. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 462-466).

    Abstract

    This paper addresses the question how to compare reaction times computed by a computational model of speech comprehension with reaction times observed from participants. The question is based on the observation that reaction time sequences differ substantially per participant, which raises the issue of how exactly the model is to be assessed. Part of the variation in reaction time sequences is caused by the so-called local speed: the current reaction time correlates to some extent with a number of previous reaction times, due to slowly varying factors such as attention and fatigue. This paper proposes a method, based on time series analysis, to filter the observed reaction times in order to separate out the local speed effects. Results show that after such filtering both the between-participant correlations and the average correlation between participant and model increase. The presented technique provides insights into relevant aspects that are to be taken into account when comparing reaction time sequences.
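    The abstract does not specify the exact time-series filter used; assuming a simple centered moving-average detrend, the idea of separating out slowly varying "local speed" from a reaction-time sequence can be sketched as follows (the window size is illustrative):

    ```python
    import numpy as np

    def remove_local_speed(log_rts, window=11):
        """Subtract a centered moving average from (log) reaction times to
        factor out slowly varying 'local speed' (attention, fatigue) before
        correlating participant and model RT sequences."""
        rts = np.asarray(log_rts, dtype=float)
        kernel = np.ones(window) / window
        # Pad the edges so the moving average has full support everywhere.
        padded = np.pad(rts, window // 2, mode="edge")
        local_speed = np.convolve(padded, kernel, mode="valid")
        return rts - local_speed
    ```

    The residual sequence reflects trial-to-trial variation with the slow drift removed, which is what would then be compared across participants and against the model.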
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2015). DIANA, an end-to-end computational model of human word comprehension. In Scottish consortium for ICPhS, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    This paper presents DIANA, a new computational model of human speech processing. It is the first model that simulates the complete processing chain from the on-line processing of an acoustic signal to the execution of a response, including reaction times. Moreover it assumes minimal modularity. DIANA consists of three components. The activation component computes a probabilistic match between the input acoustic signal and representations in DIANA’s lexicon, resulting in a list of word hypotheses changing over time as the input unfolds. The decision component operates on this list and selects a word as soon as sufficient evidence is available. Finally, the execution component accounts for the time to execute a behavioral action. We show that DIANA well simulates the average participant in a word recognition experiment.
  • Ten Bosch, L., Boves, L., Tucker, B., & Ernestus, M. (2015). DIANA: Towards computational modeling reaction times in lexical decision in North American English. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 1576-1580).

    Abstract

    DIANA is an end-to-end computational model of speech processing, which takes as input the speech signal, and provides as output the orthographic transcription of the stimulus, a word/non-word judgment and the associated estimated reaction time. So far, the model has only been tested for Dutch. In this paper, we extend DIANA such that it can also process North American English. The model is tested by having it simulate human participants in a large scale North American English lexical decision experiment. The simulations show that DIANA can adequately approximate the reaction times of an average participant (r = 0.45). In addition, they indicate that DIANA does not yet adequately model the cognitive processes that take place after stimulus offset.
  • Ten Bosch, L., Oostdijk, N., & De Ruiter, J. P. (2004). Durational aspects of turn-taking in spontaneous face-to-face and telephone dialogues. In P. Sojka, I. Kopecek, & K. Pala (Eds.), Text, Speech and Dialogue: Proceedings of the 7th International Conference TSD 2004 (pp. 563-570). Heidelberg: Springer.

    Abstract

    On the basis of two-speaker spontaneous conversations, it is shown that the distributions of both pauses and speech-overlaps of telephone and face-to-face dialogues have different statistical properties. Pauses in a face-to-face dialogue last up to 4 times longer than pauses in telephone conversations in functionally comparable conditions. There is a high correlation (0.88 or larger) between the average pause duration for the two speakers across face-to-face dialogues and telephone dialogues. The data provided form a first quantitative analysis of the complex turn-taking mechanism evidenced in the dialogues available in the 9-million-word Spoken Dutch Corpus.
  • Ten Oever, S., Van Atteveldt, N., & Sack, A. T. (2015). Increased stimulus expectancy triggers low-frequency phase reset during restricted vigilance. Journal of Cognitive Neuroscience, 27(9), 1811-1822. doi:10.1162/jocn_a_00820.

    Abstract

    Temporal cues can be used to selectively attend to relevant information during abundant sensory stimulation. However, such cues differ vastly in the accuracy of their temporal estimates, ranging from very predictable to very unpredictable. When cues are strongly predictable, attention may facilitate selective processing by aligning relevant incoming information to high neuronal excitability phases of ongoing low-frequency oscillations. However, top-down effects on ongoing oscillations when temporal cues have some predictability, but also contain temporal uncertainties, are unknown. Here, we experimentally created such a situation of mixed predictability and uncertainty: A target could occur within a limited time window after cue but was always unpredictable in exact timing. Crucially to assess top-down effects in such a mixed situation, we manipulated target probability. High target likelihood, compared with low likelihood, enhanced delta oscillations more strongly as measured by evoked power and intertrial coherence. Moreover, delta phase modulated detection rates for probable targets. The delta frequency range corresponds with half-a-period to the target occurrence window and therefore suggests that low-frequency phase reset is engaged to produce a long window of high excitability when event timing is uncertain within a restricted temporal window.
  • Ten Oever, S., & Sack, A. T. (2019). Interactions between rhythmic and feature predictions to create parallel time-content associations. Frontiers in Neuroscience, 13: 791. doi:10.3389/fnins.2019.00791.

    Abstract

    The brain is inherently proactive, constantly predicting the when (moment) and what (content) of future input in order to optimize information processing. Previous research on such predictions has mainly studied the “when” or “what” domain separately, neglecting to investigate the potential integration of both types of predictive information. In the absence of such integration, temporal cues are assumed to enhance any upcoming content at the predicted moment in time (general temporal predictor). However, if the when and what prediction domains were integrated, a much more flexible neural mechanism may be proposed in which temporal-feature interactions would allow for the creation of multiple concurrent time-content predictions (parallel time-content predictor). Here, we used a temporal association paradigm in two experiments in which sound identity was systematically paired with a specific time delay after the offset of a rhythmic visual input stream. In Experiment 1, we revealed that participants associated the time delay of presentation with the identity of the sound. In Experiment 2, we unexpectedly found that the strength of this temporal association was negatively related to the EEG steady-state evoked responses (SSVEP) in preceding trials, showing that after high neuronal responses participants responded inconsistently with the time-content associations, similar to adaptation mechanisms. In this experiment, time-content associations were only present for low SSVEP responses in previous trials. These results tentatively show that it is possible to represent multiple time-content paired predictions in parallel; however, future research is needed to investigate this interaction further.
  • Ten Oever, S., & Sack, A. T. (2015). Oscillatory phase shapes syllable perception. Proceedings of the National Academy of Sciences of the United States of America, 112(52), 15833-15837. doi:10.1073/pnas.1517519112.

    Abstract

    The role of oscillatory phase for perceptual and cognitive processes is being increasingly acknowledged. To date, little is known about the direct role of phase in categorical perception. Here we show in two separate experiments that the identification of ambiguous syllables that can either be perceived as /da/ or /ga/ is biased by the underlying oscillatory phase as measured with EEG and sensory entrainment to rhythmic stimuli. The measured phase difference in which perception is biased toward /da/ or /ga/ exactly matched the different temporal onset delays in natural audiovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for /da/. These results indicate the functional relationship between prestimulus phase and syllable identification, and signify that the origin of this phase relationship could lie in exposure and subsequent learning of unique audiovisual temporal onset differences.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., & Zion-Golumbic, E. (2014). Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia, 63, 43-50. doi:10.1016/j.neuropsychologia.2014.08.008.

    Abstract

    Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
  • Ter Bekke, M., Ozyurek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QB: Cognitive Science Society.

    Abstract

    In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered and if this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e. unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers’ memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language and memory.
  • Terband, H., Rodd, J., & Maas, E. (2015). Simulations of feedforward and feedback control in apraxia of speech (AOS): Effects of noise masking on vowel production in the DIVA model. In M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015).

    Abstract

    Apraxia of Speech (AOS) is a motor speech disorder whose precise nature is still poorly understood. A recent behavioural experiment featuring a noise masking paradigm suggests that AOS reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts [10]. In the present study, we set out to validate the interpretation of AOS as a feedforward impairment by means of a series of computational simulations with the DIVA model [6, 7] mimicking the behavioural experiment. Simulation results showed a larger reduction in vowel spacing and a smaller vowel dispersion in the masking condition compared to the no-masking condition for the simulated feedforward deficit, whereas the other groups showed an opposite pattern. These results mimic the patterns observed in the human data, corroborating the notion that AOS can be conceptualized as a deficit in feedforward control.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terrill, A. (2004). Coordination in Lavukaleve. In M. Haspelmath (Ed.), Coordinating Constructions (pp. 427-443). Amsterdam: John Benjamins.
