Publications

  • Shayan, S., Ozturk, O., & Sicoli, M. A. (2011). The thickness of pitch: Crossmodal metaphors in Farsi, Turkish and Zapotec. The Senses & Society, 6(1), 96-105. doi:10.2752/174589311X12893982233911.

    Abstract

    Speakers use vocabulary for spatial verticality and size to describe pitch. A high–low contrast is common to many languages, but others show contrasts like thick–thin and big–small. We consider uses of thick for low pitch and thin for high pitch in three languages: Farsi, Turkish, and Zapotec. We ask how metaphors for pitch structure the sound space. In a language like English, high applies to both high-pitched and high-amplitude (loud) sounds; low applies to both low-pitched and low-amplitude (quiet) sounds. Farsi, Turkish, and Zapotec organize sound in a different way. Thin applies to high pitch and low amplitude and thick to low pitch and high amplitude. We claim that these metaphors have their sources in life experiences. Musical instruments show co-occurrences of higher pitch with thinner, smaller objects and lower pitch with thicker, larger objects. On the other hand, bodily experience can ground the high–low metaphor. A raised larynx produces higher pitch and a lowered larynx lower pitch. Low-pitched sounds resonate in the chest, a lower place than high-pitched sounds. While both patterns are available from life experience, linguistic experience privileges one over the other, which results in differential structuring of the multiple dimensions of sound.
  • Shkaravska, O., & Van Eekelen, M. (2014). Univariate polynomial solutions of algebraic difference equations. Journal of Symbolic Computation, 60, 15-28. doi:10.1016/j.jsc.2013.10.010.

    Abstract

    Contrary to linear difference equations, there is no general theory of difference equations of the form G(P(x−τ_1), …, P(x−τ_s)) + G_0(x) = 0, with τ_i ∈ K, G(x_1, …, x_s) ∈ K[x_1, …, x_s] of total degree D ≥ 2 and G_0(x) ∈ K[x], where K is a field of characteristic zero. This article is concerned with the following problem: given τ_i, G and G_0, find an upper bound on the degree d of a polynomial solution P(x), if it exists. In the presented approach the problem is reduced to constructing a univariate polynomial for which d is a root. The authors formulate a sufficient condition under which such a polynomial exists. Using this condition, they give an effective bound on d, for instance, for all difference equations of the form G(P(x−a), P(x−a−1), P(x−a−2)) + G_0(x) = 0 with quadratic G, and all difference equations of the form G(P(x), P(x−τ)) + G_0(x) = 0 with G having an arbitrary degree.
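    As a notational aid only (a restatement of the equation forms quoted in the abstract above, not additional material from the paper), the general equation and the two special cases for which an effective degree bound is given can be written in LaTeX as:

        % General algebraic difference equation considered in the paper
        G\bigl(P(x-\tau_1), \ldots, P(x-\tau_s)\bigr) + G_0(x) = 0,
        \qquad \tau_i \in K, \quad \deg G = D \ge 2, \quad G_0 \in K[x]

        % Special cases for which a bound on \deg P is effective
        G\bigl(P(x-a),\, P(x-a-1),\, P(x-a-2)\bigr) + G_0(x) = 0 \quad (\text{quadratic } G)
        G\bigl(P(x),\, P(x-\tau)\bigr) + G_0(x) = 0 \quad (G \text{ of arbitrary degree})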
  • Sicoli, M. A. (2011). Agency and ideology in language shift and language maintenance. In T. Granadillo, & H. A. Orcutt-Gachiri (Eds.), Ethnographic contributions to the study of endangered languages (pp. 161-176). Tucson, AZ: University of Arizona Press.
  • Sicoli, M. A., Stivers, T., Enfield, N. J., & Levinson, S. C. (2015). Marked initial pitch in questions signals marked communicative function. Language and Speech, 58(2), 204-223. doi:10.1177/0023830914529247.

    Abstract

    In conversation, the initial pitch of an utterance can provide an early phonetic cue to the communicative function, the speech act, or the social action being implemented. We conducted quantitative acoustic measurements and statistical analyses of pitch in over 10,000 utterances, including 2512 questions, their responses, and about 5000 other utterances produced by 180 speakers from a corpus of 70 natural conversations in 10 languages. We measured pitch at the first prominence in a speaker’s utterance and discriminated utterances by language, speaker, gender, question form, and the social action achieved by the speaker’s turn. Applying multivariate logistic regression, we found that initial pitch that deviated significantly from the speaker’s median pitch level was predictive of the social action of the question. In questions designed to solicit agreement with an evaluation rather than information, pitch predictably diverged from the speaker’s median, falling in the top 10% of the speaker’s range. This latter finding reveals a kind of iconicity in the relationship between prosody and social action, in which a marked pitch correlates with a marked social action. Thus, we argue that speakers rely on pitch to provide an early signal for recipients that the question is not to be interpreted through its literal semantics but rather through an inference.
  • Sicoli, M. A. (2010). Shifting voices with participant roles: Voice qualities and speech registers in Mesoamerica. Language in Society, 39(4), 521-553. doi:10.1017/S0047404510000436.

    Abstract

    Although an increasing number of sociolinguistic researchers consider functions of voice qualities as stylistic features, few studies consider cases where voice qualities serve as the primary signs of speech registers. This article addresses this gap through the presentation of a case study of Lachixio Zapotec speech registers indexed through falsetto, breathy, creaky, modal, and whispered voice qualities. I describe the system of contrastive speech registers in Lachixio Zapotec and then track a speaker on a single evening as she switches between three of these registers. Analyzing line-by-line conversational structure, I show both obligatory and creative shifts between registers that co-occur with shifts in the participant structures of the situated social interactions. I then examine similar uses of voice qualities in other Zapotec languages and in the two unrelated language families Nahuatl and Mayan to suggest the possibility that such voice registers are a feature of the Mesoamerican culture area.
  • Sicoli, M. A., Majid, A., & Levinson, S. C. (2009). The language of sound: II. In A. Majid (Ed.), Field manual volume 12 (pp. 14-19). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446294.

    Abstract

    The task is designed to elicit vocabulary for simple sounds. The primary goal is to establish how people describe sound and what resources the language provides generally for encoding this domain. More specifically: (1) whether there is dedicated vocabulary for encoding simple sound contrasts and (2) how much consistency there is within a community in descriptions. This task builds on materials used in 'The language of sound'.
  • Sidnell, J., & Enfield, N. J. (2014). Deixis and the interactional foundations of reference. In Y. Huang (Ed.), The Oxford handbook of pragmatics.
  • Sidnell, J., Kockelman, P., & Enfield, N. J. (2014). Community and social life. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 481-483). Cambridge: Cambridge University Press.
  • Sidnell, J., Enfield, N. J., & Kockelman, P. (2014). Interaction and intersubjectivity. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 343-345). Cambridge: Cambridge University Press.
  • Sidnell, J., & Enfield, N. J. (2014). The ontology of action, in interaction. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423-446). Cambridge: Cambridge University Press.
  • Silva, S., Branco, P., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). Musical phrase boundaries, wrap-up and the closure positive shift. Brain Research, 1585, 99-107. doi:10.1016/j.brainres.2014.08.025.

    Abstract

    We investigated global integration (wrap-up) processes at the boundaries of musical phrases by comparing the effects of well-formed and non-well-formed phrases on event-related potentials time-locked to two boundary points: the onset and the offset of the boundary pause. The Closure Positive Shift, which is elicited at the boundary offset, was not modulated by the quality of phrase structure (well-formed vs. non-well-formed). In contrast, the boundary onset potentials showed different patterns for well-formed and non-well-formed phrases. Our results help to specify the functional meaning of the Closure Positive Shift in music, shed light on the large-scale structural integration of musical input, and raise new hypotheses concerning shared resources between music and language.
  • Silva, S., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. Journal of Integrative Neuroscience, 13(1), 19-34. doi:10.1142/S0219635214500022.

    Abstract

    Neuroscientific and musicological approaches to music cognition indicate that listeners familiarized in the Western tonal tradition expect a musical phrase boundary at predictable time intervals. However, phrase boundary prediction processes in music remain untested. We analyzed event-related potentials (ERPs) and event-related induced power changes at the onset and offset of a boundary pause. We made comparisons with modified melodies, where the pause was omitted and filled by tones. The offset of the pause elicited a closure positive shift (CPS), indexing phrase boundary detection. The onset of the filling tones elicited significant increases in theta and beta powers. In addition, the P2 component was larger when the filling tones started than when they ended. The responses to boundary omission suggest that listeners expected to hear a boundary pause. Therefore, boundary prediction seems to coexist with boundary detection in music segmentation.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE, 5(12), e14465. doi:10.1371/journal.pone.0014465.

    Abstract

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility to identify conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
  • Simanova, I., Van Gerven, M. A., Oostenveld, R., & Hagoort, P. (2015). Predicting the semantic category of internally generated words from neuromagnetic recordings. Journal of Cognitive Neuroscience, 27(1), 35-45. doi:10.1162/jocn_a_00690.

    Abstract

    In this study, we explore the possibility of predicting the semantic category of words from brain signals in a free word generation task. Participants produced single words from different semantic categories in a modified semantic fluency task. A Bayesian logistic regression classifier was trained to predict the semantic category of words from single-trial MEG data. Significant classification accuracies were achieved using sensor-level MEG time series at the time interval of conceptual preparation. Semantic category prediction was also possible using source-reconstructed time series, based on minimum norm estimates of cortical activity. Brain regions that contributed most to classification on the source level were identified. These were the left inferior frontal gyrus, left middle frontal gyrus, and left posterior middle temporal gyrus. Additionally, the temporal dynamics of brain activity underlying semantic preparation during word generation were explored. These results provide important insights into central aspects of language production.
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. We examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, but they also apply less strict criteria.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simon-Thomas, E. R., Keltner, D. J., Sauter, D., Sinicropi-Yao, L., & Abramson, A. (2009). The voice conveys specific emotions: Evidence from vocal burst displays. Emotion, 9, 838-846. doi:10.1037/a0017810.

    Abstract

    Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., ‘self-conscious,’ ‘pro-social’) than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion.
  • Simpson, N. H., Ceroni, F., Reader, R. H., Covill, L. E., Knight, J. C., the SLI Consortium, Hennessy, E. R., Bolton, P. F., Conti-Ramsden, G., O’Hare, A., Baird, G., Fisher, S. E., & Newbury, D. F. (2015). Genome-wide analysis identifies a role for common copy number variants in specific language impairment. European Journal of Human Genetics, 23, 1370-1377. doi:10.1038/ejhg.2014.296.

    Abstract

    An exploratory genome-wide copy number variant (CNV) study was performed in 127 independent cases with specific language impairment (SLI), their first-degree relatives (385 individuals) and 269 population controls. Language-impaired cases showed an increased CNV burden in terms of the average number of events (11.28 vs 10.01, empirical P=0.003), the total length of CNVs (717 vs 513 Kb, empirical P=0.0001), the average CNV size (63.75 vs 51.6 Kb, empirical P=0.0005) and the number of genes spanned (14.29 vs 10.34, empirical P=0.0007) when compared with population controls, suggesting that CNVs may contribute to SLI risk. A similar trend was observed in first-degree relatives regardless of affection status. The increased burden found in our study was not driven by large or de novo events, which have been described as causative in other neurodevelopmental disorders. Nevertheless, de novo CNVs might be important on a case-by-case basis, as indicated by identification of events affecting relevant genes, such as ACTR2 and CSNK1A1, and small events within known micro-deletion/-duplication syndrome regions, such as chr8p23.1. Pathway analysis of the genes present within the CNVs of the independent cases identified significant overrepresentation of acetylcholine binding, cyclic-nucleotide phosphodiesterase activity and MHC proteins as compared with controls. Taken together, our data suggest that the majority of the risk conferred by CNVs in SLI is via common, inherited events within a ‘common disorder–common variant’ model. Therefore the risk conferred by CNVs will depend upon the combination of events inherited (both CNVs and SNPs), the genetic background of the individual and the environmental factors.

    Additional information

    ejhg2014296x1.pdf ejhg2014296x2.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%) suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Constraints on the processes responsible for the extrinsic normalization of vowels. Attention, Perception & Psychophysics, 73, 1195-1215. doi:10.3758/s13414-011-0096-8.

    Abstract

    Listeners tune in to talkers’ vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the Long Term Average Spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. Previous findings were replicated with first formant (F1) manipulations of speech. Targets on a [pIt]-[pEt] (low-high F1) continuum were labeled as [pIt] more after high-F1 than after low-F1 precursors. Spectrally-rotated nonspeech versions of these materials produced similar normalization. None occurred, however, with nonspeech stimuli that were less speech-like, even though precursor-target LTAS relations were equivalent to those used earlier. Additional experiments investigated the roles of pitch movement, amplitude variation, formant location, and the stimuli's perceived similarity to speech. It appears that normalization is not restricted to speech, but that the nature of the preceding sounds does matter. Extrinsic normalization of vowels is due at least in part to an auditory process which may require familiarity with the spectro-temporal characteristics of speech.
  • Sjerps, M. J., & Reinisch, E. (2015). Divide and conquer: How perceptual contrast sensitivity and perceptual learning cooperate in reducing input variation in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 41(3), 710-722. doi:10.1037/a0039028.

    Abstract

    Listeners have to overcome variability of the speech signal that can arise, for example, because of differences in room acoustics, differences in speakers’ vocal tract properties, or idiosyncrasies in pronunciation. Two mechanisms that are involved in resolving such variation are perceptually contrastive effects that arise from surrounding acoustic context and lexically guided perceptual learning. Although both processes have been studied in great detail, little attention has been paid to how they operate relative to each other in speech perception. The present study set out to address this issue. The carrier parts of the exposure stimuli of a classical perceptual learning experiment were spectrally filtered such that the acoustically ambiguous final fricatives sounded relatively more like the lexically intended sound (Experiment 1) or the alternative (Experiment 2). Perceptual learning was found only in the latter case. The findings show that perceptual contrast effects precede lexically guided perceptual learning, at least in terms of temporal order, and potentially in terms of cognitive processing levels as well.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.

    Abstract

    This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪ papu] or [ɛ papu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛ papu] than from [ɪɛpapu] to [ɪ papu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners’ perception of vowels differs depending on the speaker’s vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners’ abilities to normalize for speaker vocal-tract properties are to an important extent the result of a process that influences representations of speech sounds early in the speech processing stream.
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Sjerps, M. J., & Meyer, A. S. (2015). Variation in dual-task performance reveals late initiation of speech planning in turn-taking. Cognition, 136, 304-324. doi:10.1016/j.cognition.2014.10.008.

    Abstract

    The smooth transitions between turns in natural conversation suggest that speakers often begin to plan their utterances while listening to their interlocutor. The present study investigates whether this is indeed the case and, if so, when utterance planning begins. Two hypotheses were contrasted: that speakers begin to plan their turn as soon as possible (in our experiments less than a second after the onset of the interlocutor’s turn), or that they do so close to the end of the interlocutor’s turn. Turn-taking was combined with a finger tapping task to measure variations in cognitive load. We assumed that the onset of speech planning in addition to listening would be accompanied by deterioration in tapping performance. Two picture description experiments were conducted. In both experiments there were three conditions: (1) Tapping and Speaking, where participants tapped a complex pattern while taking over turns from a pre-recorded speaker, (2) Tapping and Listening, where participants carried out the tapping task while overhearing two pre-recorded speakers, and (3) Speaking Only, where participants took over turns as in the Tapping and Speaking condition but without tapping. The experiments differed in the amount of tapping training the participants received at the beginning of the session. In Experiment 2, the participants’ eye-movements were recorded in addition to their speech and tapping. Analyses of the participants’ tapping performance and eye movements showed that they initiated the cognitively demanding aspects of speech planning only shortly before the end of the turn of the preceding speaker. We argue that this is a smart planning strategy, which may be the speakers’ default in many everyday situations.
  • Skiba, R. (2010). Polnisch [Polish]. In S. Colombo-Scheffold, P. Fenn, S. Jeuk, & J. Schäfer (Eds.), Ausländisch für Deutsche. Sprachen der Kinder - Sprachen im Klassenzimmer (2nd corrected and expanded edition, pp. 165-176). Freiburg: Fillibach.
  • Skoruppa, K., Cristia, A., Peperkamp, S., & Seidl, A. (2011). English-learning infants' perception of word stress patterns [JASA Express Letter]. Journal of the Acoustical Society of America, 130(1), EL50-EL55. doi:10.1121/1.3590169.

    Abstract

    Adult speakers of different free stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between initial stress and final stress among lists of Spanish-spoken disyllabic nonwords that were segmentally varied (e.g. [ˈnila, ˈtuli] vs [luˈta, puˈki]). This is evidence that English-learning infants are sensitive to lexical stress patterns, instantiated primarily by suprasegmental cues, during the second half of the first year of life.
  • Sleegers, K., Bettens, K., De Roeck, A., Van Cauwenberghe, C., Cuyvers, E., Verheijen, J., Struyfs, H., Van Dongen, J., Vermeulen, S., Engelborghs, S., Vandenbulcke, M., Vandenberghe, R., De Deyn, P., Van Broeckhoven, C., & BELNEU consortium (2015). A 22-single nucleotide polymorphism Alzheimer's disease risk score correlates with family history, onset age, and cerebrospinal fluid Aβ42. Alzheimer's & Dementia, 11(12), 1452-1460. doi:10.1016/j.jalz.2015.02.013.

    Abstract

    Introduction: The ability to identify individuals at increased genetic risk for Alzheimer's disease (AD) may streamline biomarker and drug trials and aid clinical and personal decision making. Methods: We evaluated the discriminative ability of a genetic risk score (GRS) covering 22 published genetic risk loci for AD in 1162 Flanders-Belgian AD patients and 1019 controls and assessed correlations with family history, onset age, and cerebrospinal fluid (CSF) biomarkers (Aβ1-42, T-Tau, P-Tau181P). Results: A GRS including all single nucleotide polymorphisms (SNPs) and age-specific APOE ε4 weights reached an area under the curve (AUC) of 0.70, which increased to AUC 0.78 for patients with familial predisposition. Risk of AD increased with the GRS (odds ratio 2.32 per unit, 95% confidence interval 2.08-2.58; P < 1.0e-15). Onset age and CSF Aβ1-42 decreased with increasing GRS (P_onset-age = 9.0e-11; P_Aβ = 8.9e-7). Discussion: The discriminative ability of this 22-SNP GRS is still limited, but these data illustrate that incorporation of age-specific weights improves discriminative ability. GRS-phenotype correlations highlight the feasibility of identifying individuals at highest susceptibility.
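    For readers unfamiliar with genetic risk scores: a GRS of the kind described above is conventionally computed as a weighted sum of risk-allele counts per individual. The exact weighting scheme used in this study (including the age-specific APOE ε4 weights) is not reproduced here, so the following LaTeX sketch shows only the generic form:

        % g_i = risk-allele count (0, 1 or 2) at locus i; w_i = published effect-size weight for locus i
        \mathrm{GRS} = \sum_{i=1}^{22} w_i \, g_i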
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within-group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Slobin, D. I., Bowerman, M., Brown, P., Eisenbeiss, S., & Narasimhan, B. (2011). Putting things in places: Developmental consequences of linguistic typology. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 134-165). New York: Cambridge University Press.

    Abstract

    The concept of 'event' has been posited as an ontological primitive in natural language semantics, yet relatively little research has explored patterns of event encoding. Our study explored how adults and children describe placement events (e.g., putting a book on a table) in a range of different languages (Finnish, English, German, Russian, Hindi, Tzeltal Maya, Spanish, and Turkish). Results show that the eight languages grammatically encode placement events in two main ways (Talmy, 1985, 1991), but further investigation reveals fine-grained crosslinguistic variation within each of the two groups. Children are sensitive to these finer-grained characteristics of the input language at an early age, but only when such features are perceptually salient. Our study demonstrates that a unitary notion of 'event' does not suffice to characterize complex but systematic patterns of event encoding crosslinguistically, and that children are sensitive to multiple influences, including the distributional properties of the target language in constructing these patterns in their own speech.
  • Sloetjes, H. (2014). ELAN: Multimedia annotation application. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 305-320). Oxford: Oxford University Press.
  • Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkänen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
  • Smeets, C. J. L. M., Jezierska, J., Watanabe, H., Duarri, A., Fokkens, M. R., Meijer, M., Zhou, Q., Yakovleva, T., Boddeke, E., den Dunnen, W., van Deursen, J., Bakalkin, G., Kampinga, H. H., van de Sluis, B., & Verbeek, D. S. (2015). Elevated mutant dynorphin A causes Purkinje cell loss and motor dysfunction in spinocerebellar ataxia type 23. Brain, 138(9), 2537-2552. doi:10.1093/brain/awv195.

    Abstract

    Spinocerebellar ataxia type 23 is caused by mutations in PDYN, which encodes the opioid neuropeptide precursor protein, prodynorphin. Prodynorphin is processed into the opioid peptides, α-neoendorphin, and dynorphins A and B, that normally exhibit opioid-receptor mediated actions in pain signalling and addiction. Dynorphin A is likely a mutational hotspot for spinocerebellar ataxia type 23 mutations, and in vitro data suggested that dynorphin A mutations lead to persistently elevated mutant peptide levels that are cytotoxic and may thus play a crucial role in the pathogenesis of spinocerebellar ataxia type 23. To further test this and study spinocerebellar ataxia type 23 in more detail, we generated a mouse carrying the spinocerebellar ataxia type 23 mutation R212W in PDYN. Analysis of peptide levels using a radioimmunoassay shows that these PDYNR212W mice display markedly elevated levels of mutant dynorphin A, which are associated with climbing fibre retraction and Purkinje cell loss, visualized with immunohistochemical stainings. The PDYNR212W mice reproduced many of the clinical features of spinocerebellar ataxia type 23, with gait deficits starting at 3 months of age revealed by footprint pattern analysis, and progressive loss of motor coordination and balance at the age of 12 months demonstrated by declining performances on the accelerating Rotarod. The pathologically elevated mutant dynorphin A levels in the cerebellum coincided with transcriptionally dysregulated ionotropic and metabotropic glutamate receptors and glutamate transporters, and altered neuronal excitability. In conclusion, the PDYNR212W mouse is the first animal model of spinocerebellar ataxia type 23 and our work indicates that the elevated mutant dynorphin A peptide levels are likely responsible for the initiation and progression of the disease, affecting glutamatergic signalling, neuronal excitability, and motor performance. Our novel mouse model defines a critical role for opioid neuropeptides in spinocerebellar ataxia, and suggests that normalising the elevated mutant neuropeptide levels can be explored as a therapeutic intervention.
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language–vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than those of low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America (pp. 2005-2006).

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
  • Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., & Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187.

    Abstract

    Sentence comprehension requires the retrieval of single word information from long-term memory, and the integration of this information into multiword representations. The current functional magnetic resonance imaging study explored the hypothesis that the left posterior temporal gyrus supports the retrieval of lexical-syntactic information, whereas left inferior frontal gyrus (LIFG) contributes to syntactic unification. Twenty-eight subjects read sentences and word sequences containing word-category (noun–verb) ambiguous words at critical positions. Regions contributing to the syntactic unification process should show enhanced activation for sentences compared to words, and only within sentences display a larger signal for ambiguous than unambiguous conditions. The posterior LIFG showed exactly this predicted pattern, confirming our hypothesis that LIFG contributes to syntactic unification. The left posterior middle temporal gyrus was activated more for ambiguous than unambiguous conditions (main effect over both sentences and word sequences), as predicted for regions subserving the retrieval of lexical-syntactic information from memory. We conclude that understanding language involves the dynamic interplay between left inferior frontal and left posterior temporal regions.

    Additional information

    suppl1.pdf suppl2_dutch_stimulus.pdf
  • Snijders, T. M., Petersson, K. M., & Hagoort, P. (2010). Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage, 52, 1633-1644. doi:10.1016/j.neuroimage.2010.05.035.

    Abstract

    In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interaction (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
  • Snowdon, C. T., & Cronin, K. A. (2009). Comparative cognition and neuroscience. In G. Berntson, & J. Cacioppo (Eds.), Handbook of neuroscience for the behavioral sciences (pp. 32-55). Hoboken, NJ: Wiley.
  • Snowdon, C. T., Pieper, B. A., Boe, C. Y., Cronin, K. A., Kurian, A. V., & Ziegler, T. E. (2010). Variation in oxytocin is related to variation in affiliative behavior in monogamous, pairbonded tamarins. Hormones and Behavior, 58(4), 614-618. doi:10.1016/j.yhbeh.2010.06.014.

    Abstract

    Oxytocin plays an important role in monogamous pairbonded female voles, but not in polygamous voles. Here we examined within-species variation in behavior and in female and male oxytocin levels in 14 pairs of cotton-top tamarins (Saguinus oedipus), a socially monogamous, cooperatively breeding primate in which both sexes share in parental care and territory defense. In order to obtain a stable chronic assessment of hormones and behavior, we observed behavior and collected urinary hormonal samples across the tamarins’ 3-week ovulatory cycle. We found similar levels of urinary oxytocin in both sexes. However, basal urinary oxytocin levels varied 10-fold across pairs and pair-mates displayed similar oxytocin levels. Affiliative behavior (contact, grooming, sex) also varied greatly across the sample and explained more than half the variance in pair oxytocin levels. The variables accounting for variation in oxytocin levels differed by sex. Mutual contact and grooming explained most of the variance in female oxytocin levels, whereas sexual behavior explained most of the variance in male oxytocin levels. The initiation of contact by males and solicitation of sex by females were related to increased levels of oxytocin in both. This study demonstrates within-species variation in oxytocin that is directly related to levels of affiliative and sexual behavior. However, different behavioral mechanisms influence oxytocin levels in males and females and a strong pair relationship (as indexed by high levels of oxytocin) may require the activation of appropriate mechanisms for both sexes.
  • Sonnweber, R., Ravignani, A., & Fitch, W. T. (2015). Non-adjacent visual dependency learning in chimpanzees. Animal Cognition, 18(3), 733-745. doi:10.1007/s10071-015-0840-x.

    Abstract

    Humans have a strong proclivity for structuring and patterning stimuli: Whether in space or time, we tend to mentally order stimuli in our environment and organize them into units with specific types of relationships. A crucial prerequisite for such organization is the cognitive ability to discern and process regularities among multiple stimuli. To investigate the evolutionary roots of this cognitive capacity, we tested chimpanzees—which, along with bonobos, are our closest living relatives—for simple, variable distance dependency processing in visual patterns. We trained chimpanzees to identify pairs of shapes either linked by an arbitrary learned association (arbitrary associative dependency) or a shared feature (same shape, feature-based dependency), and to recognize strings where items related in either of these ways occupied the first (leftmost) and the last (rightmost) item of the stimulus. We then probed the degree to which subjects generalized this pattern to new colors, shapes, and numbers of interspersed items. We found that chimpanzees can learn and generalize both types of dependency rules, indicating that the ability to encode both feature-based and arbitrary associative regularities over variable distances in the visual domain is not a human prerogative. Our results strongly suggest that these core components of human structural processing were already present in our last common ancestor with chimpanzees.

    Additional information

    supplementary material
  • Sonnweber, R. S., Ravignani, A., Stobbe, N., Schiestl, G., Wallner, B., & Fitch, W. T. (2015). Rank‐dependent grooming patterns and cortisol alleviation in Barbary macaques. American Journal of Primatology, 77(6), 688-700. doi:10.1002/ajp.22391.

    Abstract

    Flexibly adapting social behavior to social and environmental challenges helps to alleviate glucocorticoid (GC) levels, which may have positive fitness implications for an individual. For primates, the predominant social behavior is grooming. Giving grooming to others is particularly efficient in terms of GC mitigation. However, grooming is confined by certain limitations such as time constraints or restricted access to other group members. For instance, dominance hierarchies may impact grooming partner availability in primate societies. Consequently specific grooming patterns emerge. In despotic species focusing grooming activity on preferred social partners significantly ameliorates GC levels in females of all ranks. In this study we investigated grooming patterns and GC management in Barbary macaques, a comparably relaxed species. We monitored changes in grooming behavior and cortisol (C) for females of different ranks. Our results show that the C‐amelioration associated with different grooming patterns had a gradual connection with dominance hierarchy: while higher‐ranking individuals showed lowest urinary C measures when they focused their grooming on selected partners within their social network, lower‐ranking individuals expressed lowest C levels when dispersing their grooming activity evenly across their social partners. We argue that the relatively relaxed social style of Barbary macaque societies allows individuals to flexibly adapt grooming patterns, which is associated with rank‐specific GC management.
  • De Sousa, H. (2011). Changes in the language of perception in Cantonese. The Senses & Society, 6(1), 38-47. doi:10.2752/174589311X12893982233678.

    Abstract

    The way a language encodes sensory experiences changes over time, and often this correlates with other changes in the society. There are noticeable differences in the language of perception between older and younger speakers of Cantonese in Hong Kong and Macau. Younger speakers make finer distinctions in the distal senses, but have less knowledge of the finer categories of the proximal senses than older speakers. The difference in the language of perception between older and younger speakers probably reflects the rapid changes that happened in Hong Kong and Macau in the last fifty years, from an underdeveloped and less literate society, to a developed and highly literate society. In addition to the increase in literacy, the education system has also undergone significant Westernization. Western-style education systems have most likely created finer categorizations in the distal senses. At the same time, the traditional finer distinctions of the proximal senses have become less salient: as the society became more urbanized and sanitized, people have had fewer opportunities to experience the variety of olfactory sensations experienced by their ancestors. This case study investigating interactions between socio-economic 'development' and the elaboration of the senses hopefully contributes to the study of the ineffability of the senses.
  • De Sousa, H., Langella, F., & Enfield, N. J. (2015). Temperature terms in Lao, Southern Zhuang, Southern Pinghua and Cantonese. In M. Koptjevskaja-Tamm (Ed.), The linguistics of temperature (pp. 594-638). Amsterdam: Benjamins.
  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Spaeth, J. M., Hunter, C. S., Bonatakis, L., Guo, M., French, C. A., Slack, I., Hara, M., Fisher, S. E., Ferrer, J., Morrisey, E. E., Stanger, B. Z., & Stein, R. (2015). The FOXP1, FOXP2 and FOXP4 transcription factors are required for islet alpha cell proliferation and function in mice. Diabetologia, 58, 1836-1844. doi:10.1007/s00125-015-3635-3.

    Abstract

    Aims/hypothesis: Several forkhead box (FOX) transcription factor family members have important roles in controlling pancreatic cell fates and maintaining beta cell mass and function, including FOXA1, FOXA2 and FOXM1. In this study we have examined the importance of FOXP1, FOXP2 and FOXP4 of the FOXP subfamily in islet cell development and function. Methods: Mice harbouring floxed alleles for Foxp1, Foxp2 and Foxp4 were crossed with pan-endocrine Pax6-Cre transgenic mice to generate single and compound Foxp mutant mice. Mice were monitored for changes in glucose tolerance by IPGTT, serum insulin and glucagon levels by radioimmunoassay, and endocrine cell development and proliferation by immunohistochemistry. Gene expression and glucose-stimulated hormone secretion experiments were performed with isolated islets. Results: Only the triple-compound Foxp1/2/4 conditional knockout (cKO) mutant had an overt islet phenotype, manifested physiologically by hypoglycaemia and hypoglucagonaemia. This resulted from the reduction in glucagon-secreting alpha cell mass and function. The proliferation of alpha cells was profoundly reduced in Foxp1/2/4 cKO islets through the effects on mediators of replication (i.e. decreased Ccna2, Ccnb1 and Ccnd2 activators, and increased Cdkn1a inhibitor). Adult islet Foxp1/2/4 cKO beta cells secrete insulin normally while the remaining alpha cells have impaired glucagon secretion. Conclusions/interpretation: Collectively, these findings reveal an important role for the FOXP1, 2, and 4 proteins in governing postnatal alpha cell expansion and function.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

    OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m^2" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of this phenotype that appropriately accounts for covariation with height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, was performed in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC). RESULTS: GWAS based on BMI[x] yielded marked differences in the genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at the genome-wide significance level (rs11676272: 0.28 kg/m^3.1 change per G allele (0.19, 0.38), P = 6 × 10^-9). In contrast, they showed marginal evidence of association with conventional BMI (rs11676272: 0.25 kg/m^2 (0.15, 0.35), P = 6 × 10^-7). Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences from that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant and was strongly correlated with expression of this gene. Our work highlights the importance of well-understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits.

    Additional information

    oby20840-sup-0001-suppinfo.docx
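    The height-adjusted index referred to in the abstract above, BMI[x] = weight/height^x, can be illustrated with a brief numerical sketch. The sketch below is not the ALSPAC analysis pipeline; it uses simulated data with invented parameters to show that an exponent obtained by regressing log(weight) on log(height) yields an index uncorrelated with height, whereas the conventional exponent of 2 does not.

```python
# Minimal sketch on simulated data (not the ALSPAC analysis); all numbers are
# invented for illustration. It shows how an exponent x can be chosen so that
# BMI[x] = weight / height**x is uncorrelated with height, unlike weight / height**2.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
height = rng.normal(1.38, 0.07, n)                         # hypothetical child heights (m)
weight = 16.0 * height**3.1 * rng.lognormal(0.0, 0.08, n)  # hypothetical weights (kg)

# If weight ~ c * height**x with multiplicative noise independent of height,
# the slope of log(weight) on log(height) recovers the exponent x directly.
x, _ = np.polyfit(np.log(height), np.log(weight), 1)
bmi_x = weight / height**x
bmi_2 = weight / height**2

print(f"estimated exponent x:           {x:.2f}")
print(f"corr(BMI[x], height):           {np.corrcoef(bmi_x, height)[0, 1]:+.3f}")  # close to 0
print(f"corr(conventional BMI, height): {np.corrcoef(bmi_2, height)[0, 1]:+.3f}")  # clearly non-zero
```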
  • Stergiakouli, E., Martin, J., Hamshere, M. L., Langley, K., Evans, D. M., St Pourcain, B., Timpson, N. J., Owen, M. J., O'Donovan, M., Thapar, A., & Davey Smith, G. (2015). Shared Genetic Influences Between Attention-Deficit/Hyperactivity Disorder (ADHD) Traits in Children and Clinical ADHD. Journal of the American Academy of Child and Adolescent Psychiatry, 54(4), 322-327. doi:10.1016/j.jaac.2015.01.010.
  • Stewart, A. J., Kidd, E., & Haigh, M. (2009). Early sensitivity to discourse-level anomalies: Evidence from self-paced reading. Discourse Processes, 46(1), 46-69. doi:10.1080/01638530802629091.

    Abstract

    Two word-by-word, self-paced reading experiments investigated the speed with which readers were sensitive to discourse-level anomalies. An account arguing for delayed sensitivity (Guzman & Klin, 2000) was contrasted with one allowing for rapid sensitivity (Myers & O'Brien, 1998). Anomalies related to spatial information (Experiment 1) and character-attribute information (Experiment 2) were examined. Both experiments found that readers displayed rapid sensitivity to the anomalous information. A reading-time penalty was observed for the region of text containing the anomalous information. These findings are most compatible with an account of text processing whereby incoming words are rapidly evaluated with respect to prior context, and are not consistent with an account that argues for delayed integration. Results are discussed in light of their implications for competing models of text processing.
  • Stewart, A. J., Haigh, M., & Kidd, E. (2009). An investigation into the online processing of counterfactual and indicative conditionals. Quarterly Journal of Experimental Psychology, 62(11), 2113-2125. doi:10.1080/17470210902973106.

    Abstract

    The ability to represent conditional information is central to human cognition. In two self-paced reading experiments we investigated how readers process counterfactual conditionals (e.g., If Darren had been athletic, he could probably have played on the rugby team) and indicative conditionals (e.g., If Darren is athletic, he probably plays on the rugby team). In Experiment 1 we focused on how readers process counterfactual conditional sentences. We found that processing of the antecedent of counterfactual conditionals was rapidly constrained by prior context (i.e., knowing whether Darren was or was not athletic). A reading-time penalty was observed for the critical region of text comprising the last word of the antecedent and the first word of the consequent when the information in the antecedent did not fit with prior context. In Experiment 2 we contrasted counterfactual conditionals with indicative conditionals. For counterfactual conditionals we found the same effect on the critical region as we found in Experiment 1. In contrast, however, we found no evidence that processing of the antecedent of indicative conditionals was constrained by prior context. For indicative conditionals (but not for counterfactual conditionals), the results we report are consistent with the suppositional account of conditionals. We propose that current theories of conditionals need to be able to account for online processing differences between indicative and counterfactual conditionals.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

  • Stivers, T., & Rossano, F. (2010). A scalar view of response relevance. Research on Language and Social Interaction, 43, 49-56. doi:10.1080/08351810903471381.
  • Stivers, T. (2010). An overview of the question-response system in American English conversation. Journal of Pragmatics, 42, 2772-2781. doi:10.1016/j.pragma.2010.04.011.

    Abstract

    This article, part of a 10-language comparative project on question–response sequences, discusses these sequences in American English conversation. The data are video-taped spontaneous naturally occurring conversations involving two to five adults. Relying on these data, I document the basic distributional patterns of types of questions asked (polar, Q-word or alternative as well as sub-types), types of social actions implemented by these questions (e.g., repair initiations, requests for confirmation, offers or requests for information), and types of responses (e.g., repetitional answers or yes/no tokens). I show that declarative questions are used more commonly in conversation than would be suspected by traditional grammars of English, and that questions are used for a wider range of functions than grammars would suggest. Finally, this article offers distributional support for the idea that responses that are better “fitted” with the question are preferred.
  • Stivers, T., & Enfield, N. J. (2010). A coding scheme for question-response sequences in conversation. Journal of Pragmatics, 42, 2620-2626. doi:10.1016/j.pragma.2010.04.002.

    Abstract

    No abstract is available for this article.
  • Stivers, T., & Rossano, F. (2010). Mobilizing response. Research on Language and Social Interaction, 43, 3-31. doi:10.1080/08351810903471258.

    Abstract

    A fundamental puzzle in the organization of social interaction concerns how one individual elicits a response from another. This article asks what it is about some sequentially initial turns that reliably mobilizes a coparticipant to respond and under what circumstances individuals are accountable for producing a response. Whereas a linguistic approach suggests that this is what "questions" (more generally) and interrogativity (more narrowly) are for, a sociological approach to social interaction suggests that the social action a person is implementing mobilizes a recipient's response. We find that although both theories have merit, neither adequately solves the puzzle. We argue instead that different actions mobilize response to different degrees. Speakers then design their turns to perform actions, and with particular response-mobilizing features of turn design speakers can hold recipients more accountable for responding or not. This model of response relevance allows sequential position, action, and turn design to each contribute to response relevance.
  • Stivers, T. (2011). Morality and question design: 'Of course' as contesting a presupposition of askability. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 82-106). Cambridge: Cambridge University Press.
  • Stivers, T., Mondada, L., & Steensig, J. (2011). Knowledge, morality and affiliation in social interaction. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 3-26). Cambridge: Cambridge University Press.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (2010). Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics, 42, 2615-2619. doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., & Hayashi, M. (2010). Transformative answers: One way to resist a question's constraints. Language in Society, 39, 1-25. doi:10.1017/S0047404509990637.

    Abstract

    A number of Conversation Analytic studies have documented that question recipients have a variety of ways to push against the constraints that questions impose on them. This article explores the concept of transformative answers – answers through which question recipients retroactively adjust the question posed to them. Two main sorts of adjustments are discussed: question term transformations and question agenda transformations. It is shown that the operations through which interactants implement term transformations are different from the operations through which they implement agenda transformations. Moreover, term-transforming answers resist only the question’s design, while agenda-transforming answers effectively resist both design and agenda, thus implying that agenda-transforming answers resist more strongly than design-transforming answers. The implications of these different sorts of transformations for alignment and affiliation are then explored.
  • Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., De Ruiter, J. P., Yoon, K.-E., & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America, 106 (26), 10587-10592. doi:10.1073/pnas.0903616106.

    Abstract

    Informal verbal interaction is the core matrix for human social life. A mechanism for coordinating this basic mode of interaction is a system of turn-taking that regulates who is to speak and when. Yet relatively little is known about how this system varies across cultures. The anthropological literature reports significant cultural differences in the timing of turn-taking in ordinary conversation. We test these claims and show that in fact there are striking universals in the underlying pattern of response latency in conversation. Using a worldwide sample of 10 languages drawn from traditional indigenous communities to major world languages, we show that all of the languages tested provide clear evidence for a general avoidance of overlapping talk and a minimization of silence between conversational turns. In addition, all of the languages show the same factors explaining within-language variation in speed of response. We do, however, find differences across the languages in the average gap between turns, within a range of 250 ms from the cross-language mean. We believe that a natural sensitivity to these tempo differences leads to a subjective perception of dramatic or even fundamental differences as offered in ethnographic reports of conversational style. Our empirical evidence suggests robust human universals in this domain, where local variations are quantitative only, pointing to a single shared infrastructure for language use with likely ethological foundations.

    Additional information

    Stivers_2009_universals_suppl.pdf
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
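    The coherence spectral-density analysis mentioned in the abstract above can be illustrated with a small sketch. This is not the authors' MEG pipeline; it simply computes magnitude-squared coherence between two simulated signals that share a common rhythm, with the sampling rate, frequency, and noise levels invented for the example.

```python
# Illustrative sketch only (not the authors' analysis); the 6 Hz rhythm, sampling
# rate, and noise levels are invented. Coherence is high at the shared frequency
# and near zero elsewhere.
import numpy as np
from scipy.signal import coherence

fs = 250                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)               # 60 s of simulated data
rng = np.random.default_rng(1)

shared = np.sin(2 * np.pi * 6 * t)         # rhythm common to both recordings
sig_a = shared + rng.normal(0, 1, t.size)  # "communicator A": shared rhythm + independent noise
sig_b = shared + rng.normal(0, 1, t.size)  # "communicator B": shared rhythm + independent noise

f, cxy = coherence(sig_a, sig_b, fs=fs, nperseg=2 * fs)
print(f"coherence near 6 Hz:  {cxy[np.argmin(np.abs(f - 6))]:.2f}")   # high
print(f"coherence near 20 Hz: {cxy[np.argmin(np.abs(f - 20))]:.2f}")  # near zero
```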
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence-final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N399 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • De Swart, P., & Van Bergen, G. (2014). Unscrambling the lexical nature of weak definites. In A. Aguilar-Guevara, B. Le Bruyn, & J. Zwarts (Eds.), Weak referentiality (pp. 287-310). Amsterdam: Benjamins.

    Abstract

    We investigate how the lexical nature of weak definites influences the phenomenon of direct object scrambling in Dutch. Earlier experiments have indicated that weak definites are more resistant to scrambling than strong definites. We examine how the notion of weak definiteness used in this experimental work can be reduced to lexical connectedness. We explore four different ways of quantifying the relation between a direct object and the verb. Our results show that predictability of a verb given the object (verb cloze probability) provides the best fit to the weak/strong distinction used in the earlier experiments.
  • Sweegers, C. C. G., Takashima, A., Fernández, G., & Talamini, L. M. (2015). Neural mechanisms supporting the extraction of general knowledge across episodic memories. NeuroImage, 87, 138-146. doi:10.1016/j.neuroimage.2013.10.063.

    Abstract

    General knowledge acquisition entails the extraction of statistical regularities from the environment. At high levels of complexity, this may involve the extraction, and consolidation, of associative regularities across event memories. The underlying neural mechanisms would likely involve a hippocampo-neocortical dialog, as proposed previously for system-level consolidation. To test these hypotheses, we assessed possible differences in consolidation between associative memories containing cross-episodic regularities and unique associative memories. Subjects learned face–location associations, half of which responded to complex regularities regarding the combination of facial features and locations, whereas the other half did not. Importantly, regularities could only be extracted over hippocampus-encoded, associative aspects of the items. Memory was assessed both immediately after encoding and 48 h later, under fMRI acquisition. Our results suggest that processes related to system-level reorganization occur preferentially for regular associations across episodes. Moreover, the build-up of general knowledge regarding regular associations appears to involve the coordinated activity of the hippocampus and mediofrontal regions. The putative cross-talk between these two regions might support a mechanism for regularity extraction. These findings suggest that the consolidation of cross-episodic regularities may be a key mechanism underlying general knowledge acquisition.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Tagliapietra, L., Fanari, R., De Candia, C., & Tabossi, P. (2009). Phonotactic regularities in the segmentation of spoken Italian. Quarterly Journal of Experimental Psychology, 62(2), 392 -415. doi:10.1080/17470210801907379.

    Abstract

    Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues, which specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

  • Tagliapietra, L., Fanari, R., Collina, S., & Tabossi, P. (2009). Syllabic effects in Italian lexical access. Journal of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4.

    Abstract

    Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. Results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word. Moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that in Romance languages syllables are the units for lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation (“where” information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.

    Abstract

    To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we repeatedly trained Dutch participants over the course of a month to articulate a set of novel disyllabic input strings written in Greek script to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables, to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole-word units) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that recombined syllables of the trained pseudowords elicited brain activation similar to that for the trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization. NeuroImage, 84, 265-278. doi:10.1016/j.neuroimage.2013.08.023.

    Abstract

    The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words’ phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words’ phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24-hour delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding.
  • Takaso, H., Eisner, F., Wise, R. J. S., & Scott, S. K. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: A Positron Emission Tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236. doi:10.1044/1092-4388(2009/09-0009).

    Abstract

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many non-stuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission tomography (PET) was used to image regional cerebral blood flow changes, an index of neural activity, and assessed the influence of increasing amounts of delay. Results: Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). Conclusions: This study permitted distinctions to be made between the neural response to hearing one's voice at a delay, and the neural activity that correlates with this delay. Notably all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensori-motor ‘how’ system in the production of speech under conditions of delayed auditory feedback.
  • Tamaoka, K., Saito, N., Kiyama, S., Timmer, K., & Verdonschot, R. G. (2014). Is pitch accent necessary for comprehension by native Japanese speakers? - An ERP investigation. Journal of Neurolinguistics, 27(1), 31-40. doi:10.1016/j.jneuroling.2013.08.001.

    Abstract

    Not unlike the tonal system in Chinese, Japanese habitually attaches pitch accents to the production of words. However, in contrast to Chinese, few homophonic word-pairs are really distinguished by pitch accents (Shibata & Shibata, 1990). This predicts that pitch accent plays a small role in lexical selection for Japanese language comprehension. The present study investigated whether native Japanese speakers necessarily use pitch accent in the processing of accent-contrasted homophonic pairs (e.g., ame [LH] for 'candy' and ame [HL] for 'rain'), measuring electroencephalographic (EEG) potentials. Electrophysiological evidence (i.e., N400) was obtained when a word was semantically incorrect for a given context but not for incorrectly accented homophones. This suggests that pitch accent indeed plays a minor role when understanding Japanese.
  • Tanner, D., Nicol, J., & Brehm, L. (2014). The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76, 195-215. doi:10.1016/j.jml.2014.07.003.

    Abstract

    Attraction interference in language comprehension and production may result from common or different processes. In the present paper, we investigate attraction interference during language comprehension, focusing on the contexts in which interference arises and the time-course of these effects. Using evidence from event-related brain potentials (ERPs) and sentence judgment times, we show that agreement attraction in comprehension is best explained as morphosyntactic interference during memory retrieval. This stands in contrast to attraction as a message-level process involving the representation of the subject NP's number features, which is a strong contributor to attraction in production. We thus argue that the cognitive antecedents of agreement attraction in comprehension are non-identical with those of attraction in production, and moreover, that attraction in comprehension is primarily a consequence of similarity-based interference in cue-based memory retrieval processes. We suggest that mechanisms responsible for attraction during language comprehension are a subset of those involved in language production.
  • Tarenskeen, S., Broersma, M., & Geurts, B. (2015). Overspecification of color, pattern, and size: Salience, absoluteness, and consistency. Frontiers in Psychology, 6: 1703. doi:10.3389/fpsyg.2015.01703.

    Abstract

    The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Using a between-participants design, however, we find similar rates of pattern and size overspecification, which are both lower than the rate of color overspecification. This indicates that although many speakers are more likely to include color than pattern (probably because color is more salient), they may also treat pattern like color due to a tendency toward consistency. We find no increase in size overspecification when the salience of size is increased, suggesting that speakers are more likely to include absolute than relative attributes. However, we do find an increase in size overspecification when mentioning the attributes is triggered, which again shows that speakers tend to refer in a consistent manner, and that there are circumstances in which even size overspecification is frequently produced.
  • Tekcan, A. I., Yilmaz, E., Kaya Kızılö, B., Karadöller, D. Z., Mutafoğlu, M., & Erciyes, A. (2015). Retrieval and phenomenology of autobiographical memories in blind individuals. Memory, 23(3), 329-339. doi:10.1080/09658211.2014.886702.

    Abstract

    Although visual imagery is argued to be an essential component of autobiographical memory, there have been surprisingly few studies on autobiographical memory processes in blind individuals, who have had no or limited visual input. The purpose of the present study was to investigate how blindness affects retrieval and phenomenology of autobiographical memories. We asked 48 congenital/early blind and 48 sighted participants to recall autobiographical memories in response to six cue words, and to fill out the Autobiographical Memory Questionnaire measuring a number of variables including imagery, belief and recollective experience associated with each memory. Blind participants retrieved fewer memories and reported higher auditory imagery at retrieval than sighted participants. Moreover, within the blind group, participants with total blindness reported higher auditory imagery than those with some light perception. Blind participants also assigned higher importance, belief and recollection ratings to their memories than sighted participants. Importantly, these group differences remained the same for recent as well as childhood memories.
  • Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.

    Abstract

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.
  • Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.

    Abstract

    When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke et al., 2008, and Moores et al., 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
  • Ten Oever, S., Van Atteveldt, N., & Sack, A. T. (2015). Increased stimulus expectancy triggers low-frequency phase reset during restricted vigilance. Journal of Cognitive Neuroscience, 27(9), 1811-1822. doi:10.1162/jocn_a_00820.

    Abstract

    Temporal cues can be used to selectively attend to relevant information during abundant sensory stimulation. However, such cues differ vastly in the accuracy of their temporal estimates, ranging from very predictable to very unpredictable. When cues are strongly predictable, attention may facilitate selective processing by aligning relevant incoming information to high neuronal excitability phases of ongoing low-frequency oscillations. However, top-down effects on ongoing oscillations when temporal cues have some predictability, but also contain temporal uncertainties, are unknown. Here, we experimentally created such a situation of mixed predictability and uncertainty: A target could occur within a limited time window after cue but was always unpredictable in exact timing. Crucially to assess top-down effects in such a mixed situation, we manipulated target probability. High target likelihood, compared with low likelihood, enhanced delta oscillations more strongly as measured by evoked power and intertrial coherence. Moreover, delta phase modulated detection rates for probable targets. The delta frequency range corresponds with half-a-period to the target occurrence window and therefore suggests that low-frequency phase reset is engaged to produce a long window of high excitability when event timing is uncertain within a restricted temporal window.
  • Ten Oever, S., & Sack, A. T. (2015). Oscillatory phase shapes syllable perception. Proceedings of the National Academy of Sciences of the United States of America, 112(52), 15833-15837. doi:10.1073/pnas.1517519112.

    Abstract

    The role of oscillatory phase for perceptual and cognitive processes is being increasingly acknowledged. To date, little is known about the direct role of phase in categorical perception. Here we show in two separate experiments that the identification of ambiguous syllables that can either be perceived as /da/ or /ga/ is biased by the underlying oscillatory phase as measured with EEG and sensory entrainment to rhythmic stimuli. The measured phase difference in which perception is biased toward /da/ or /ga/ exactly matched the different temporal onset delays in natural audiovisual speech between mouth movements and speech sounds, which last 80 ms longer for /ga/ than for /da/. These results indicate the functional relationship between prestimulus phase and syllable identification, and signify that the origin of this phase relationship could lie in exposure and subsequent learning of unique audiovisual temporal onset differences.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., & Zion-Golumbic, E. (2014). Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia, 63, 43-50. doi:10.1016/j.neuropsychologia.2014.08.008.

    Abstract

    Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
  • Ter Avest, I. J., & Mulder, K. (2009). The Acquisition of Gender Agreement in the Determiner Phrase by Bilingual Children. Toegepaste Taalwetenschap in Artikelen, 81(1), 133-142.
  • Terrill, A. (2010). Complex predicates and complex clauses in Lavukaleve. In J. Bowden, N. P. Himmelman, & M. Ross (Eds.), A journey through Austronesian and Papuan linguistic and cultural space: Papers in honour of Andrew K. Pawley (pp. 499-512). Canberra: Pacific Linguistics.
  • Terrill, A. (2010). [Review of Bowern, Claire. 2008. Linguistic fieldwork: a practical guide]. Language, 86(2), 435-438. doi:10.1353/lan.0.0214.
  • Terrill, A. (2009). [Review of Felix K. Ameka, Alan Dench, and Nicholas Evans (eds). 2006. Catching language: The standing challenge of grammar writing]. Language Documentation & Conservation, 3(1), 132-137. Retrieved from http://hdl.handle.net/10125/4432.
  • Terrill, A. (2010). [Review of R. A. Blust The Austronesian languages. 2009. Canberra: Pacific Linguistics]. Oceanic Linguistics, 49(1), 313-316. doi:10.1353/ol.0.0061.

    Abstract

    In lieu of an abstract, here is a preview of the article. This is a marvelous, dense, scholarly, detailed, exhaustive, and ambitious book. In 800-odd pages, it seeks to describe the whole huge majesty of the Austronesian language family, as well as the history of the family, the history of ideas relating to the family, and all the ramifications of such topics. Blust doesn't just describe, he goes into exhaustive detail, and not just over a few topics, but over every topic he covers. This is an incredible achievement, representing a lifetime of experience. This is not a book to be read from cover to cover—it is a book to be dipped into, pondered, and considered, slowly and carefully. The book is not organized by area or subfamily; readers interested in one area or family can consult the authoritative work on Western Austronesian (Adelaar and Himmelmann 2005), or, for the Oceanic languages, Lynch, Ross, and Crowley (2002). Rather, Blust's stated aim "is to provide a comprehensive overview of Austronesian languages which integrates areal interests into a broader perspective" (xxiii). Thus the aim is more ambitious than just discussion of areal features or historical connections, but seeks to describe the interconnections between these. The Austronesian language family is very large, second only in size to Niger-Congo (xxii). It encompasses over 1,000 members, and its protolanguage has been dated back to 6,000 years ago (xxii). The exact groupings of some Austronesian languages are still under discussion, but broadly, the family is divided into ten major subgroups, nine of which are spoken in Taiwan, the homeland of the Austronesian family. The tenth, Malayo-Polynesian, is itself divided into two major groups: Western Malayo-Polynesian, which is spread throughout the Philippines, Indonesia, and mainland Southeast Asia to Madagascar; and Central-Eastern Malayo-Polynesian, spoken from eastern Indonesia throughout the Pacific. The geographic, cultural, and linguistic diversity of the family
  • Terrill, A. (2011). Languages in contact: An exploration of stability and change in the Solomon Islands. Oceanic Linguistics, 50(2), 312-337.

    Abstract

    The Papuan-Oceanic world has long been considered a hotbed of contact-induced linguistic change, and there have been a number of studies of deep linguistic influence between Papuan and Oceanic languages (like those by Thurston and Ross). This paper assesses the degree and type of contact-induced language change in the Solomon Islands, between the four Papuan languages—Bilua (spoken on Vella Lavella, Western Province), Touo (spoken on southern Rendova, Western Province), Savosavo (spoken on Savo Island, Central Province), and Lavukaleve (spoken in the Russell Islands, Central Province)—and their Oceanic neighbors. First, a claim is made for a degree of cultural homogeneity for Papuan and Oceanic-speaking populations within the Solomons. Second, lexical and grammatical borrowing are considered in turn, in an attempt to identify which elements in each of the four Papuan languages may have an origin in Oceanic languages—and indeed which elements in Oceanic languages may have their origin in Papuan languages. Finally, an assessment is made of the degrees of stability versus change in the Papuan and Oceanic languages of the Solomon Islands.
  • Terrill, A. (2011). Limits of the substrate: Substrate grammatical influence in Solomon Islands Pijin. In C. Lefebvre (Ed.), Creoles, their substrates, and language typology (pp. 513-529). Amsterdam: John Benjamins.

    Abstract

    What grammatical elements of a substrate language find their way into a creole? Grammatical features of the Oceanic substrate languages have been shown to be crucial in the development of Solomon Islands Pijin and of Melanesian Pidgin as a whole (Keesing 1988), so one might expect constructions which are very stable in the Oceanic family of languages to show up as substrate influence in the creole. This paper investigates three constructions in Oceanic languages which have been stable over thousands of years and persist throughout a majority of the Oceanic languages spoken in the Solomon Islands. The paper asks whether these are the sorts of constructions which could be expected to be reflected in Solomon Islands Pijin and shows that none of these persistent constructions appears in Solomon Islands Pijin at all. The absence of these constructions in Solomon Islands Pijin could be due to simplification: Creole genesis involves simplification of the substrate grammars. However, while simplification could be the explanation, it is not necessarily the case that all complex structures become simplified. For instance Solomon Islands Pijin pronoun paradigms are more complex than those in English, but the complexity is similar to that of the substrate languages. Thus it is not the case that all areas of a creole language are necessarily simplified. One must therefore look further than just simplification for an explanation of the presence or absence of stable grammatical features deriving from the substrate in creole languages. An account based on constraints in specific domains (Siegel 1999) is a better predictor of the behaviour of substrate constructions in Solomon Islands Pijin.
  • Terwisscha van Scheltinga, A. F., Bakker, S. C., Van Haren, N. E., Boos, H. B., Schnack, H. G., Cahn, W., Hoogman, M., Zwiers, M. P., Fernandez, G., Franke, B., Hulshoff Pol, H. E., & Kahn, R. S. (2014). Association study of fibroblast growth factor genes and brain volumes in schizophrenic patients and healthy controls. Psychiatric Genetics, 24, 283-284. doi:10.1097/YPG.0000000000000057.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it has been unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Kan, C. C., Tendolkar, I., & Hagoort, P. (2009). Neural correlates of pragmatic language comprehension in autism disorders. Brain, 132, 1941-1952. doi:10.1093/brain/awp103.

    Abstract

    Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker's age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language-dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding. This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
  • Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van den Brink, D., Buitelaar, J. K., & Hagoort, P. (2009). Unification of speaker and meaning in language comprehension: An fMRI study. Journal of Cognitive Neuroscience, 21, 2085-2099. doi:10.1162/jocn.2008.21161.

    Abstract

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.
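    The Tesink et al. abstracts above report group-by-condition activation differences, for example a larger incongruent-versus-congruent effect in one group than in the other. As a purely illustrative, hypothetical sketch of that interaction logic, and not the authors' actual analysis pipeline, the snippet below compares condition effects between two groups using made-up per-subject region-of-interest values; all numbers, group sizes, and variable names are invented.

    # Hypothetical sketch of a group-by-condition interaction test on invented data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Made-up ROI estimates (arbitrary units): rows = subjects, columns = [congruent, incongruent]
    control = rng.normal(loc=[0.2, 0.8], scale=0.3, size=(24, 2))
    asd = rng.normal(loc=[0.2, 0.9], scale=0.3, size=(24, 2))

    # Condition effect per subject: incongruent minus congruent
    control_diff = control[:, 1] - control[:, 0]
    asd_diff = asd[:, 1] - asd[:, 0]

    # Group-by-condition interaction: do the condition effects differ between groups?
    t, p = stats.ttest_ind(asd_diff, control_diff)
    print(f"interaction t = {t:.2f}, p = {p:.3f}")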
  • Theakston, A., Coates, A., & Holler, J. (2014). Handling agents and patients: Representational cospeech gestures help children comprehend complex syntactic constructions. Developmental Psychology, 50(7), 1973-1984. doi:10.1037/a0036694.

    Abstract

    Gesture is an important precursor of children’s early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children’s comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old (M = 4 years 6 months) children witnessed 2-participant actions described using the infrequent object-cleft-construction (OCC; It was the dog that the cat chased). Half saw an experimenter accompanying her descriptions with gestures representing the 2 participants and indicating the direction of action; the remaining children did not witness gesture. Children who witnessed gestures showed better comprehension of the OCC than those who did not witness gestures, both in and beyond the immediate physical context, but this benefit was restricted to the oldest 4-year-olds. In Study 2, a further group of older 4-year-old children (M = 4 years 7 months) witnessed the same 2-participant actions described by an experimenter and accompanied by gestures, but the gesture represented only the 2 participants and not the direction of the action. Again, a benefit of gesture was observed on subsequent comprehension of the OCC. We interpret these findings as demonstrating that representational cospeech gestures can help children comprehend complex linguistic structures by highlighting the roles played by the participants in the event.

  • Theakston, A., & Rowland, C. F. (2009). Introduction to Special Issue: Cognitive approaches to language acquisition. Cognitive Linguistics, 20(3), 477-480. doi:10.1515/COGL.2009.021.
  • Theakston, A. L., & Rowland, C. F. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research, 52, 1449-1470. doi:10.1044/1092-4388(2009/08-0037).

    Abstract

    Purpose: The question of how and when English-speaking children acquire auxiliaries is the subject of extensive debate. Some researchers posit the existence of innately given Universal Grammar principles to guide acquisition, although some aspects of the auxiliary system must be learned from the input. Others suggest that auxiliaries can be learned without Universal Grammar, citing evidence of piecemeal learning in their support. This study represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. Method: Twelve English-speaking children participated in 3 tasks designed to elicit auxiliary BE in declaratives and yes/no and wh-questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of 2 forms of BE (is,are) differed according to auxiliary form and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusion: These data are problematic for existing accounts of auxiliary acquisition and highlight the need for researchers working within both generativist and constructivist frameworks to develop more detailed theories of acquisition that directly predict the pattern of acquisition observed.
  • Thiebaut de Schotten, M., Dell'Acqua, F., Forkel, S. J., Simmons, A., Vergani, F., Murphy, D. G. M., & Catani, M. (2011). A lateralized brain network for visuospatial attention. Nature Neuroscience, 14, 1245-1246. doi:10.1038/nn.2905.

    Abstract

    Right hemisphere dominance for visuospatial attention is characteristic of most humans, but its anatomical basis remains unknown. We report the first evidence in humans for a larger parieto-frontal network in the right than left hemisphere, and a significant correlation between the degree of anatomical lateralization and asymmetry of performance on visuospatial tasks. Our results suggest that hemispheric specialization is associated with an unbalanced speed of visuospatial processing.

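    The Thiebaut de Schotten et al. abstract above relates an anatomical lateralization measure to behavioural asymmetry. A common way to express such lateralization is an index of the form (right - left) / (right + left), which can then be correlated with a behavioural score. The snippet below is a hypothetical sketch of that computation on made-up values; the variable names, units, and numbers are invented and do not come from the study.

    # Hypothetical sketch of a lateralization index and its correlation with behaviour.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 20

    # Made-up tract volumes per participant (arbitrary units)
    right_tract = rng.normal(11.0, 1.5, n)
    left_tract = rng.normal(10.0, 1.5, n)

    # Lateralization index: positive values indicate rightward anatomical asymmetry
    li = (right_tract - left_tract) / (right_tract + left_tract)

    # Made-up behavioural asymmetry score, loosely coupled to the index for the example
    behaviour = 0.5 * li + rng.normal(0.0, 0.05, n)

    r, p = stats.pearsonr(li, behaviour)
    print(f"r = {r:.2f}, p = {p:.3f}")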
