Publications

  • Seuren, P. A. M. (1998). [Review of the book Adverbial subordination; A typology and history of adverbial subordinators based on European languages by Bernd Kortmann]. Cognitive Linguistics, 9(3), 317-319. doi:10.1515/cogl.1998.9.3.315.
  • Seuren, P. A. M. (1973). [Review of the book Philosophy of language by Robert J. Clack and Bertrand Russell]. Foundations of Language, 9(3), 440-441.
  • Seuren, P. A. M. (1973). [Review of the book Semantics. An interdisciplinary reader in philosophy, linguistics and psychology ed. by Danny D. Steinberg and Leon A. Jakobovits]. Neophilologus, 57(2), 198-213. doi:10.1007/BF01514332.
  • Seuren, P. A. M. (1998). [Review of the book The Dutch pendulum: Linguistics in the Netherlands 1740-1900 by Jan Noordegraaf]. Bulletin of the Henry Sweet Society, 31, 46-50.
  • Seuren, P. A. M. (2004). [Review of the book A short history of Structural linguistics by Peter Matthews]. Linguistics, 42(1), 235-236. doi:10.1515/ling.2004.005.
  • Seuren, P. A. M. (2011). How I remember Evert Beth [In memoriam]. Synthese, 179(2), 207-210. doi:10.1007/s11229-010-9777-4.
  • Seuren, P. A. M. (1987). How relevant?: A commentary on Sperber and Wilson "Précis of relevance: Communication and cognition". Behavioral and Brain Sciences, 10, 731-733. doi:10.1017/S0140525X00055564.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1987). Les paradoxes et le langage. Logique et Analyse, 30(120), 365-383.
  • Seuren, P. A. M., & Jaspers, D. (2014). Logico-cognitive structure in the lexicon. Language, 90(3), 607-643. doi:10.1353/lan.2014.0058.

    Abstract

    This study is a prolegomenon to a formal theory of the natural growth of conceptual and lexical fields. Negation, in the various forms in which it occurs in language, is found to be a powerful indicator. Unlike in standard logic, natural language negation selects its complement within universes of discourse that are, for practical and functional reasons, restricted in various ways and to different degrees. It is hypothesized that a system of cognitive principles drives recursive processes of universe restriction, which in turn affects logical relations within the restricted universes. This approach provides a new perspective in which to view the well-known clashes between standard logic and natural logical intuitions. Lexicalization in language, especially the morphological incorporation of negation, is limited to highly restricted universes, which explains, for example, why a dog can be said not to be a Catholic, but also not to be a non-Catholic. Cognition is taken to restrict the universe of discourse to contrary pairs, splitting up one or both of the contraries into further subuniverses as a result of further cognitive activity. It is shown how a logically sound square of opposition, expanded to a hexagon (Jacoby 1950, 1960, Sesmat 1951, Blanché 1952, 1953, 1966), is generated by a hierarchy of universe restrictions, defining the notion ‘natural’ for logical systems. The logical hexagon contains two additional vertices, one for ‘some but not all’ (the Y-type) and one for ‘either all or none’ (the U-type), and incorporates both the classic square and the Hamiltonian triangle of contraries. Some is thus considered semantically ambiguous, representing two distinct quantifiers. The pragmaticist claim that the language system contains only the standard logical ‘some perhaps all’ and that the ‘some but not all’ meaning is pragmatically derived from the use of the system is rejected.
Four principles are proposed according to which negation selects a complement from the subuniverses at hand. On the basis of these principles and of the logico-cognitive system proposed, the well-known nonlexicalization not only of *nall and *nand but also of many other nonlogical cases found throughout the lexicons of languages is analyzed and explained.
  • Seuren, P. A. M. (1998). Obituary. Herman Christiaan Wekker 1943–1997. Journal of Pidgin and Creole Languages, 13(1), 159-162.
  • Seuren, P. A. M. (2014). The cognitive ontogenesis of predicate logic. Notre Dame Journal of Formal Logic, 55, 499-532. doi:10.1215/00294527-2798718.

    Abstract

    Since Aristotle and the Stoa, there has been a clash, worsened by modern predicate logic, between logically defined operator meanings and natural intuitions. Pragmatics has tried to neutralize the clash by an appeal to the Gricean conversational maxims. The present study argues that the pragmatic attempt has been unsuccessful. The “softness” of the Gricean explanation fails to do justice to the robustness of the intuitions concerned, leaving the relation between the principles evoked and the observed facts opaque. Moreover, there are cases where the Gricean maxims fail to apply. A more adequate solution consists in the devising of a sound natural logic, part of the innate cognitive equipment of mankind. This account has proved successful in conjunction with a postulated cognitive mechanism in virtue of which the universe of discourse (Un) is stepwise and recursively restricted, so that the negation selects different complements according to the degree of restrictedness of Un. This mechanism explains not only the discrepancies between natural logical intuitions and known logical systems; it also accounts for certain systematic lexicalization gaps in the languages of the world. Finally, it is shown how stepwise restriction of Un produces the ontogenesis of natural predicate logic, while at the same time resolving the intuitive clashes with established logical systems that the Gricean maxims sought to explain.
  • Seuren, P. A. M. (1973). Zero-output rules. Foundations of Language, 10(2), 317-328.
  • Sha, L., Wu, X., Yao, Y., Wen, B., Feng, J., Sha, Z., Wang, X., Xing, X., Dou, W., Jin, L., Li, W., Wang, N., Shen, Y., Wang, J., Wu, L., & Xu, Q. (2014). Notch Signaling Activation Promotes Seizure Activity in Temporal Lobe Epilepsy. Molecular Neurobiology, 49(2), 633-644.

    Abstract

    Notch signaling in the nervous system is often regarded as a developmental pathway. However, recent studies have suggested that Notch is associated with neuronal discharges. Here, focusing on temporal lobe epilepsy, we found that Notch signaling was activated in the kainic acid (KA)-induced epilepsy model and in human epileptogenic tissues. Using an acute model of seizures, we showed that DAPT, an inhibitor of Notch, inhibited ictal activity. In contrast, pretreatment with exogenous Jagged1 to elevate Notch signaling before KA application had proconvulsant effects. In vivo, we demonstrated that the impacts of activated Notch signaling on seizures can in part be attributed to the regulatory role of Notch signaling on excitatory synaptic activity in CA1 pyramidal neurons. In vitro, we found that DAPT treatment impaired synaptic vesicle endocytosis in cultured hippocampal neurons. Taken together, our findings suggest a correlation between aberrant Notch signaling and epileptic seizures. Notch signaling is up-regulated in response to seizure activity, and its activation further promotes neuronal excitation of CA1 pyramidal neurons in acute seizures.
  • Shao, Z., Roelofs, A., Acheson, D. J., & Meyer, A. S. (2014). Electrophysiological evidence that inhibition supports lexical selection in picture naming. Brain Research, 1586, 130-142. doi:10.1016/j.brainres.2014.07.009.

    Abstract

    We investigated the neural basis of inhibitory control during lexical selection. Participants overtly named pictures while response times (RTs) and event-related brain potentials (ERPs) were recorded. The difficulty of lexical selection was manipulated by using object and action pictures with high name agreement (few response candidates) versus low name agreement (many response candidates). To assess the involvement of inhibition, we conducted delta plot analyses of naming RTs and examined the N2 component of the ERP. We found longer mean naming RTs and a larger N2 amplitude in the low relative to the high name agreement condition. For action naming we found a negative correlation between the slopes of the slowest delta segment and the difference in N2 amplitude between the low and high name agreement conditions. The converging behavioral and electrophysiological evidence suggests that selective inhibition is engaged to reduce competition during lexical selection in picture naming.
  • Shao, Z., Roelofs, A., & Meyer, A. S. (2014). Predicting naming latencies for action pictures: Dutch norms. Behavior Research Methods, 46, 274-283. doi:10.3758/s13428-013-0358-6.

    Abstract

    The present study provides Dutch norms for age of acquisition, familiarity, imageability, image agreement, visual complexity, word frequency, and word length (in syllables) for 124 line drawings of actions. Ratings were obtained from 117 Dutch participants. Word frequency was determined on the basis of the SUBTLEX-NL corpus (Keuleers, Brysbaert, & New, Behavior Research Methods, 42, 643–650, 2010). For 104 of the pictures, naming latencies and name agreement were determined in a separate naming experiment with 74 native speakers of Dutch. The Dutch norms closely corresponded to the norms for British English. Multiple regression analysis showed that age of acquisition, imageability, image agreement, visual complexity, and name agreement were significant predictors of naming latencies, whereas word frequency and word length were not. Combined with the results of a principal-component analysis, these findings suggest that variables influencing the processes of conceptual preparation and lexical selection affect latencies more strongly than do variables influencing word-form encoding.

    Additional information

    Shao_Behav_Res_2013_Suppl_Mat.doc
  • Shao, Z., Janse, E., Visser, K., & Meyer, A. S. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5: 772. doi:10.3389/fpsyg.2014.00772.

    Abstract

    This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n=82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.
  • Shatzman, K. B., & Schiller, N. O. (2004). The word frequency effect in picture naming: Contrasting two hypotheses using homonym pictures. Brain and Language, 90(1-3), 160-169. doi:10.1016/S0093-934X(03)00429-2.

    Abstract

    Models of speech production disagree on whether or not homonyms have a shared word-form representation. To investigate this issue, a picture-naming experiment was carried out using Dutch homonyms of which both meanings could be presented as a picture. Naming latencies for the low-frequency meanings of homonyms were slower than for those of the high-frequency meanings. However, no frequency effect was found for control words, which matched the frequency of the homonyms' meanings. Subsequent control experiments indicated that the difference in naming latencies for the homonyms could be attributed to processes earlier than word-form retrieval. Specifically, it appears that low name agreement slowed down the naming of the low-frequency homonym pictures.
  • Shayan, S., Ozturk, O., Bowerman, M., & Majid, A. (2014). Spatial metaphor in language can promote the development of cross-modal mappings in children. Developmental Science, 17(4), 636-643. doi:10.1111/desc.12157.

    Abstract

    Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a cross-modal association task. All groups, except for German children, performed significantly better than chance. German-speaking adults’ success suggests the pitch-to-thickness association can be learned by experience. But the fact that German children were at chance indicates that this learning takes time. Intriguingly, Farsi and Turkish children's performance suggests that learning cross-modal associations can be boosted through experience with consistent metaphorical mappings in the input language.
  • Shayan, S., Ozturk, O., & Sicoli, M. A. (2011). The thickness of pitch: Crossmodal metaphors in Farsi, Turkish and Zapotec. The Senses & Society, 6(1), 96-105. doi:10.2752/174589311X12893982233911.

    Abstract

    Speakers use vocabulary for spatial verticality and size to describe pitch. A high–low contrast is common to many languages, but others show contrasts like thick–thin and big–small. We consider uses of thick for low pitch and thin for high pitch in three languages: Farsi, Turkish, and Zapotec. We ask how metaphors for pitch structure the sound space. In a language like English, high applies to both high-pitched as well as high-amplitude (loud) sounds; low applies to low-pitched as well as low-amplitude (quiet) sounds. Farsi, Turkish, and Zapotec organize sound in a different way. Thin applies to high pitch and low amplitude and thick to low pitch and high amplitude. We claim that these metaphors have their sources in life experiences. Musical instruments show co-occurrences of higher pitch with thinner, smaller objects and lower pitch with thicker, larger objects. On the other hand, bodily experience can ground the high–low metaphor. A raised larynx produces higher pitch and lowered larynx lower pitch. Low-pitched sounds resonate the chest, a lower place than high-pitched sounds. While both patterns are available from life experience, linguistic experience privileges one over the other, which results in differential structuring of the multiple dimensions of sound.
  • Shitova, N., Roelofs, A., Schriefers, H., Bastiaansen, M., & Schoffelen, J.-M. (2017). Control adjustments in speaking: Electrophysiology of the Gratton effect in picture naming. Cortex, 92, 289-303. doi:10.1016/j.cortex.2017.04.017.

    Abstract

    Accumulating evidence suggests that spoken word production requires different amounts of top-down control depending on the prevailing circumstances. For example, during Stroop-like tasks, the interference in response time (RT) is typically larger following congruent trials than following incongruent trials. This effect is called the Gratton effect, and has been taken to reflect top-down control adjustments based on the previous trial type. Such control adjustments have been studied extensively in Stroop and Eriksen flanker tasks (mostly using manual responses), but not in the picture-word interference (PWI) task, which is a workhorse of language production research. In one of the few studies of the Gratton effect in PWI, Van Maanen and Van Rijn (2010) examined the effect in picture naming RTs during dual-task performance. Based on PWI effect differences between dual-task conditions, they argued that the functional locus of the PWI effect differs between post-congruent trials (i.e., locus in perceptual and conceptual encoding) and post-incongruent trials (i.e., locus in word planning). However, the dual-task procedure may have contaminated the results. We therefore performed an EEG study on the Gratton effect in a regular PWI task. We observed a PWI effect in the RTs, in the N400 component of the event-related brain potentials, and in the midfrontal theta power, regardless of the previous trial type. Moreover, the RTs, N400, and theta power reflected the Gratton effect. These results provide evidence that the PWI effect arises at the word planning stage following both congruent and incongruent trials, while the amount of top-down control changes depending on the previous trial type.
  • Shitova, N., Roelofs, A., Coughler, C., & Schriefers, H. (2017). P3 event-related brain potential reflects allocation and use of central processing capacity in language production. Neuropsychologia, 106, 138-145. doi:10.1016/j.neuropsychologia.2017.09.024.

    Abstract

    Allocation and use of central processing capacity have been associated with the P3 event-related brain potential amplitude in a large variety of non-linguistic tasks. However, little is known about the P3 in spoken language production. Moreover, the few studies that are available report opposing P3 effects when task complexity is manipulated. We investigated allocation and use of central processing capacity in a spoken phrase production task: Participants switched every second trial between describing pictures using noun phrases with one adjective (size only; simple condition, e.g., “the big desk”) or two adjectives (size and color; complex condition, e.g., “the big red desk”). Capacity allocation was manipulated by complexity, and capacity use by switching. Response time (RT) was longer for complex than for simple trials. Moreover, complexity and switching interacted: RTs were longer on switch than on repeat trials for simple phrases but shorter on switch than on repeat trials for complex phrases. P3 amplitude increased with complexity. Moreover, complexity and switching interacted: The complexity effect was larger on the switch trials than on the repeat trials. These results provide evidence that the allocation and use of central processing capacity in language production are differentially reflected in the P3 amplitude.
  • Shkaravska, O., & Van Eekelen, M. (2014). Univariate polynomial solutions of algebraic difference equations. Journal of Symbolic Computation, 60, 15-28. doi:10.1016/j.jsc.2013.10.010.

    Abstract

    Contrary to linear difference equations, there is no general theory of difference equations of the form G(P(x−τ_1), …, P(x−τ_s)) + G_0(x) = 0, with τ_i ∈ K, G(x_1, …, x_s) ∈ K[x_1, …, x_s] of total degree D ≥ 2, and G_0(x) ∈ K[x], where K is a field of characteristic zero. This article is concerned with the following problem: given τ_i, G, and G_0, find an upper bound on the degree d of a polynomial solution P(x), if it exists. In the presented approach the problem is reduced to constructing a univariate polynomial for which d is a root. The authors formulate a sufficient condition under which such a polynomial exists. Using this condition, they give an effective bound on d, for instance, for all difference equations of the form G(P(x−a), P(x−a−1), P(x−a−2)) + G_0(x) = 0 with quadratic G, and all difference equations of the form G(P(x), P(x−τ)) + G_0(x) = 0 with G having an arbitrary degree.
  • Sicoli, M. A. (2010). Shifting voices with participant roles: Voice qualities and speech registers in Mesoamerica. Language in Society, 39(4), 521-553. doi:10.1017/S0047404510000436.

    Abstract

    Although an increasing number of sociolinguistic researchers consider functions of voice qualities as stylistic features, few studies consider cases where voice qualities serve as the primary signs of speech registers. This article addresses this gap through the presentation of a case study of Lachixio Zapotec speech registers indexed through falsetto, breathy, creaky, modal, and whispered voice qualities. I describe the system of contrastive speech registers in Lachixio Zapotec and then track a speaker on a single evening where she switches between three of these registers. Analyzing line-by-line conversational structure I show both obligatory and creative shifts between registers that co-occur with shifts in the participant structures of the situated social interactions. I then examine similar uses of voice qualities in other Zapotec languages and in the two unrelated language families Nahuatl and Mayan to suggest the possibility that such voice registers are a feature of the Mesoamerican culture area.
  • Silva, S., Inácio, F., Folia, V., & Petersson, K. M. (2017). Eye movements in implicit artificial grammar learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1387-1402. doi:10.1037/xlm0000350.

    Abstract

    Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference- and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
  • Silva, S., Branco, P., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). Musical phrase boundaries, wrap-up and the closure positive shift. Brain Research, 1585, 99-107. doi:10.1016/j.brainres.2014.08.025.

    Abstract

    We investigated global integration (wrap-up) processes at the boundaries of musical phrases by comparing the effects of well and non-well formed phrases on event-related potentials time-locked to two boundary points: the onset and the offset of the boundary pause. The Closure Positive Shift, which is elicited at the boundary offset, was not modulated by the quality of phrase structure (well vs. non-well formed). In contrast, the boundary onset potentials showed different patterns for well and non-well formed phrases. Our results contribute to specify the functional meaning of the Closure Positive Shift in music, shed light on the large-scale structural integration of musical input, and raise new hypotheses concerning shared resources between music and language.
  • Silva, S., Petersson, K. M., & Castro, S. L. (2017). The effects of ordinal load on incidental temporal learning. Quarterly Journal of Experimental Psychology, 70(4), 664-674. doi:10.1080/17470218.2016.1146909.

    Abstract

    How can we grasp the temporal structure of events? A few studies have indicated that representations of temporal structure are acquired when there is an intention to learn, but not when learning is incidental. Response-to-stimulus intervals, uncorrelated temporal structures, unpredictable ordinal information, and lack of metrical organization have been pointed out as key obstacles to incidental temporal learning, but the literature includes piecemeal demonstrations of learning under all these circumstances. We suggest that the unacknowledged effects of ordinal load may help reconcile these conflicting findings, ordinal load referring to the cost of identifying the sequence of events (e.g., tones, locations) where a temporal pattern is embedded. In a first experiment, we manipulated ordinal load into simple and complex levels. Participants learned ordinal-simple sequences, despite their uncorrelated temporal structure and lack of metrical organization. They did not learn ordinal-complex sequences, even though there were neither response-to-stimulus intervals nor unpredictable ordinal information. In a second experiment, we probed learning of ordinal-complex sequences with strong metrical organization, and again there was no learning. We conclude that ordinal load is a key obstacle to incidental temporal learning. Further analyses showed that the effect of ordinal load is to mask the expression of temporal knowledge, rather than to prevent learning.
  • Silva, S., Folia, V., Hagoort, P., & Petersson, K. M. (2017). The P600 in Implicit Artificial Grammar Learning. Cognitive Science, 41(1), 137-157. doi:10.1111/cogs.12343.

    Abstract

    The suitability of the Artificial Grammar Learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g., subsequence familiarity) rather than the underlying syntax structure. Therefore, in this study, we controlled for the surface characteristics of the test sequences (associative chunk strength) and recorded the EEG before (baseline preference classification) and after (preference and grammaticality classification) exposure to a grammar. A typical, centroparietal P600 effect was elicited by grammatical violations after exposure, suggesting that the AGL P600 effect signals a response to structural irregularities. Moreover, preference and grammaticality classification showed a qualitatively similar ERP profile, strengthening the idea that the implicit structural mere exposure paradigm in combination with preference classification is a suitable alternative to the traditional grammaticality classification test.
  • Silva, S., Barbosa, F., Marques-Teixeira, J., Petersson, K. M., & Castro, S. L. (2014). You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. Journal of Integrative Neuroscience, 13(1), 19-34. doi:10.1142/S0219635214500022.

    Abstract

    Neuroscientific and musicological approaches to music cognition indicate that listeners familiarized in the Western tonal tradition expect a musical phrase boundary at predictable time intervals. However, phrase boundary prediction processes in music remain untested. We analyzed event-related potentials (ERPs) and event-related induced power changes at the onset and offset of a boundary pause. We made comparisons with modified melodies, where the pause was omitted and filled by tones. The offset of the pause elicited a closure positive shift (CPS), indexing phrase boundary detection. The onset of the filling tones elicited significant increases in theta and beta powers. In addition, the P2 component was larger when the filling tones started than when they ended. The responses to boundary omission suggest that listeners expected to hear a boundary pause. Therefore, boundary prediction seems to coexist with boundary detection in music segmentation.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE, 5(12), e14465. doi:10.1371/journal.pone.0014465.

    Abstract

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility to identify conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
  • Simanova, I., Hagoort, P., Oostenveld, R., & Van Gerven, M. A. J. (2014). Modality-independent decoding of semantic information from the human brain. Cerebral Cortex, 24, 426-434. doi:10.1093/cercor/bhs324.

    Abstract

    An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with 4 stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all 4 modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of those brain regions, which correctly discriminated between the categories, independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session where subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality and have clear implications for understanding the functional mechanisms of semantic memory.
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, they also apply less strict criteria.
  • Simon, E., & Sjerps, M. J. (2017). Phonological category quality in the mental lexicon of child and adult learners. International Journal of Bilingualism, 21(4), 474-499. doi:10.1177/1367006915626589.

    Abstract

    Aims and objectives: The aim was to identify which criteria children use to decide on the category membership of native and non-native vowels, and to get insight into the organization of phonological representations in the bilingual mind. Methodology: The study consisted of two cross-language mispronunciation detection tasks in which L2 vowels were inserted into L1 words and vice versa. In Experiment 1, 10- to 12-year-old Dutch-speaking children were presented with Dutch words which were either pronounced with the target Dutch vowel or with an English vowel inserted in the Dutch consonantal frame. Experiment 2 was a mirror of the first, with English words which were pronounced “correctly” or which were “mispronounced” with a Dutch vowel. Data and analysis: Analyses focused on the extent to which child and adult listeners accepted substitutions of Dutch vowels by English ones, and vice versa. Findings: The results of Experiment 1 revealed that between the age of ten and twelve children have well-established phonological vowel categories in their native language. However, Experiment 2 showed that in their non-native language, children tended to accept mispronounced items which involve sounds from their native language. At the same time, though, they did not fully rely on their native phonemic inventory because the children accepted most of the correctly pronounced English items. Originality: While many studies have examined native and non-native perception by infants and adults, studies on first and second language perception of school-age children are rare. This study adds to the body of literature aimed at expanding our knowledge in this area. Implications: The study has implications for models of the organization of the bilingual mind: while proficient adult non-native listeners generally have clearly separated sets of phonological representations for their two languages, for non-proficient child learners the L1 phonology still exerts a strong influence on the L2 phonology.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Simpson, N. H., Addis, L., Brandler, W. M., Slonims, V., Clark, A., Watson, J., Scerri, T. S., Hennessy, E. R., Stein, J., Talcott, J., Conti-Ramsden, G., O'Hare, A., Baird, G., Fairfax, B. P., Knight, J. C., Paracchini, S., Fisher, S. E., Newbury, D. F., & The SLI Consortium (2014). Increased prevalence of sex chromosome aneuploidies in specific language impairment and dyslexia. Developmental Medicine and Child Neurology, 56, 346-353. doi:10.1111/dmcn.12294.

    Abstract

    Aim: Sex chromosome aneuploidies increase the risk of spoken or written language disorders but individuals with specific language impairment (SLI) or dyslexia do not routinely undergo cytogenetic analysis. We assess the frequency of sex chromosome aneuploidies in individuals with language impairment or dyslexia. Method: Genome-wide single nucleotide polymorphism genotyping was performed in three sample sets: a clinical cohort of individuals with speech and language deficits (87 probands: 61 males, 26 females; age range 4 to 23 years), a replication cohort of individuals with SLI, from both clinical and epidemiological samples (209 probands: 139 males, 70 females; age range 4 to 17 years), and a set of individuals with dyslexia (314 probands: 224 males, 90 females; age range 7 to 18 years). Results: In the clinical language-impaired cohort, three abnormal karyotypic results were identified in probands (proband yield 3.4%). In the SLI replication cohort, six abnormalities were identified providing a consistent proband yield (2.9%). In the sample of individuals with dyslexia, two sex chromosome aneuploidies were found giving a lower proband yield of 0.6%. In total, two XYY, four XXY (Klinefelter syndrome), three XXX, one XO (Turner syndrome), and one unresolved karyotype were identified. Interpretation: The frequency of sex chromosome aneuploidies within each of the three cohorts was increased over the expected population frequency (approximately 0.25%), suggesting that genetic testing may prove worthwhile for individuals with language and literacy problems and normal non-verbal IQ. Early detection of these aneuploidies can provide information and direct the appropriate management for individuals.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Constraints on the processes responsible for the extrinsic normalization of vowels. Attention, Perception & Psychophysics, 73, 1195-1215. doi:10.3758/s13414-011-0096-8.

    Abstract

    Listeners tune in to talkers’ vowels through extrinsic normalization. We asked here whether this process could be based on compensation for the Long Term Average Spectrum (LTAS) of preceding sounds and whether the mechanisms responsible for normalization are indifferent to the nature of those sounds. If so, normalization should apply to nonspeech stimuli. Previous findings were replicated with first formant (F1) manipulations of speech. Targets on a [pIt]-[pEt] (low-high F1) continuum were labeled as [pIt] more after high-F1 than after low-F1 precursors. Spectrally-rotated nonspeech versions of these materials produced similar normalization. None occurred, however, with nonspeech stimuli that were less speech-like, even though precursor-target LTAS relations were equivalent to those used earlier. Additional experiments investigated the roles of pitch movement, amplitude variation, formant location, and the stimuli's perceived similarity to speech. It appears that normalization is not restricted to speech, but that the nature of the preceding sounds does matter. Extrinsic normalization of vowels is due at least in part to an auditory process which may require familiarity with the spectro-temporal characteristics of speech.
  • Sjerps, M. J., Mitterer, H., & McQueen, J. M. (2011). Listening to different speakers: On the time-course of perceptual compensation for vocal-tract characteristics. Neuropsychologia, 49, 3831-3846. doi:10.1016/j.neuropsychologia.2011.09.044.

    Abstract

    This study used an active multiple-deviant oddball design to investigate the time-course of normalization processes that help listeners deal with between-speaker variability. Electroencephalograms were recorded while Dutch listeners heard sequences of non-words (standards and occasional deviants). Deviants were [ɪ papu] or [ɛ papu], and the standard was [ɪɛpapu], where [ɪɛ] was a vowel that was ambiguous between [ɛ] and [ɪ]. These sequences were presented in two conditions, which differed with respect to the vocal-tract characteristics (i.e., the average 1st formant frequency) of the [papu] part, but not of the initial vowels [ɪ], [ɛ] or [ɪɛ] (these vowels were thus identical across conditions). Listeners more often detected a shift from [ɪɛpapu] to [ɛ papu] than from [ɪɛpapu] to [ɪ papu] in the high F1 context condition; the reverse was true in the low F1 context condition. This shows that listeners’ perception of vowels differs depending on the speaker's vocal-tract characteristics, as revealed in the speech surrounding those vowels. Cortical electrophysiological responses reflected this normalization process as early as about 120 ms after vowel onset, which suggests that shifts in perception precede influences due to conscious biases or decision strategies. Listeners’ abilities to normalize for speaker-vocal-tract properties are in large part the result of a process that influences representations of speech sounds early in the speech processing stream.
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Skeide, M. A., Kumar, U., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical crosstalk in the visual system of illiterates. Science Advances, 3(5): e1602612. doi:10.1126/sciadv.1602612.

    Abstract

    Learning to read is known to result in a reorganization of the developing cerebral cortex. In this longitudinal resting-state functional magnetic resonance imaging study in illiterate adults we show that only 6 months of literacy training can lead to neuroplastic changes in the mature brain. We observed that literacy-induced neuroplasticity is not confined to the cortex but increases the functional connectivity between the occipital lobe and subcortical areas in the midbrain and the thalamus. Individual rates of connectivity increase were significantly related to the individual decoding skill gains. These findings crucially complement current neurobiological concepts of normal and impaired literacy acquisition.
  • Skiba, R., Wittenburg, F., & Trilsbeek, P. (2004). New DoBeS web site: Contents & functions. Language Archive Newsletter, 1(2), 4-4.
  • Skirgard, H., Roberts, S. G., & Yencken, L. (2017). Why are some languages confused for others? Investigating data from the Great Language Game. PLoS One, 12(4): e0165934. doi:10.1371/journal.pone.0165934.

    Abstract

    In this paper we explore the results of a large-scale online game called ‘the Great Language Game’, in which people listen to an audio speech sample and make a forced-choice guess about the identity of the language from 2 or more alternatives. The data include 15 million guesses from 400 audio recordings of 78 languages. We investigate which languages are confused for which in the game, and if this correlates with the similarities that linguists identify between languages. This includes shared lexical items, similar sound inventories and established historical relationships. Our findings are, as expected, that players are more likely to confuse two languages that are objectively more similar. We also investigate factors that may affect players’ ability to accurately select the target language, such as how many people speak the language, how often the language is mentioned in written materials and the economic power of the target language community. We see that non-linguistic factors affect players’ ability to accurately identify the target. For example, languages with wider ‘global reach’ are more often identified correctly. This suggests that both linguistic and cultural knowledge influence the perception and recognition of languages and their similarity.
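The core of the confusion analysis described above is tabulating guesses against target languages and symmetrizing the confusion rate. A toy sketch with invented guess records (all language pairs and counts are made up for illustration):

```python
from collections import Counter

# Hypothetical (true language, guessed language) records, one per game round.
guesses = [
    ("Danish", "Norwegian"), ("Danish", "Danish"), ("Danish", "Norwegian"),
    ("Norwegian", "Danish"), ("Norwegian", "Norwegian"),
    ("Korean", "Korean"), ("Korean", "Japanese"), ("Korean", "Korean"),
]

counts = Counter(guesses)                 # (target, guess) -> count
totals = Counter(t for t, _ in guesses)   # target -> number of rounds

def confusion_rate(a, b):
    """Symmetrized rate at which language a is guessed as b and vice versa."""
    return (counts[(a, b)] + counts[(b, a)]) / (totals[a] + totals[b])

accuracy = {t: counts[(t, t)] / totals[t] for t in totals}
print(accuracy)
print(confusion_rate("Danish", "Norwegian"))
```

In the paper-scale version, the symmetrized confusion rates for all language pairs can then be correlated with linguistic similarity measures (shared lexical items, sound inventories) and per-language accuracy regressed on non-linguistic predictors such as speaker population.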
  • Skoruppa, K., Cristia, A., Peperkamp, S., & Seidl, A. (2011). English-learning infants' perception of word stress patterns [JASA Express Letter]. Journal of the Acoustical Society of America, 130(1), EL50-EL55. doi:10.1121/1.3590169.

    Abstract

    Adult speakers of different free stress languages (e.g., English, Spanish) differ both in their sensitivity to lexical stress and in their processing of suprasegmental and vowel quality cues to stress. In a head-turn preference experiment with a familiarization phase, both 8-month-old and 12-month-old English-learning infants discriminated between initial stress and final stress among lists of Spanish-spoken disyllabic nonwords that were segmentally varied (e.g. [ˈnila, ˈtuli] vs [luˈta, puˈki]). This is evidence that English-learning infants are sensitive to lexical stress patterns, instantiated primarily by suprasegmental cues, during the second half of the first year of life.
  • Slobin, D. I., Ibarretxe-Antuñano, I., Kopecka, A., & Majid, A. (2014). Manners of human gait: A crosslinguistic event-naming study. Cognitive Linguistics, 25, 701-741. doi:10.1515/cog-2014-0061.

    Abstract

    Crosslinguistic studies of expressions of motion events have found that Talmy's binary typology of verb-framed and satellite-framed languages is reflected in language use. In particular, Manner of motion is relatively more elaborated in satellite-framed languages (e.g., in narrative, picture description, conversation, translation). The present research builds on previous controlled studies of the domain of human motion by eliciting descriptions of a wide range of manners of walking and running filmed in natural circumstances. Descriptions were elicited from speakers of two satellite-framed languages (English, Polish) and three verb-framed languages (French, Spanish, Basque). The sampling of events in this study resulted in four major semantic clusters for these five languages: walking, running, non-canonical gaits (divided into bounce-and-recoil and syncopated movements), and quadrupedal movement (crawling). Counts of verb types found a broad tendency for satellite-framed languages to show greater lexical diversity, along with substantial within group variation. Going beyond most earlier studies, we also examined extended descriptions of manner of movement, isolating types of manner. The following categories of manner were identified and compared: attitude of actor, rate, effort, posture, and motor patterns of legs and feet. Satellite-framed speakers tended to elaborate expressive manner verbs, whereas verb-framed speakers used modification to add manner to neutral motion verbs.
  • Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: Universals in wh-words. Journal of Pragmatics, 116, 1-20. doi:10.1016/j.pragma.2017.04.004.

    Abstract

    This study investigates whether there is a universal tendency for content interrogative words (wh-words) within a language to sound similar in order to facilitate pragmatic inference in conversation. Gaps between turns in conversation are very short, meaning that listeners must begin planning their turn as soon as possible. While previous research has shown that paralinguistic features such as prosody and eye gaze provide cues to the pragmatic function of upcoming turns, we hypothesise that a systematic phonetic cue that marks interrogative words would also help early recognition of questions (allowing early preparation of answers), for instance wh-words sounding similar within a language. We analyzed 226 languages from 66 different language families by means of permutation tests. We found that initial segments of wh-words were more similar within a language than between languages, also when controlling for language family, geographic area (stratified permutation) and analyzability (compound phrases excluded). Random samples tests revealed that initial segments of wh-words were more similar than initial segments of randomly selected word sets and conceptually related word sets (e.g., body parts, actions, pronouns). Finally, we hypothesized that this cue would be more useful at the beginning of a turn, so the similarity of the initial segment of wh-words should be greater in languages that place them at the beginning of a clause. We gathered typological data on 110 languages, and found the predicted trend, although statistical significance was not attained. While there may be several mechanisms that bring about this pattern (e.g., common derivation), we suggest that the ultimate explanation of the similarity of interrogative words is to facilitate early speech-act recognition. Importantly, this hypothesis can be tested empirically, and the current results provide a sound basis for future experimental tests.
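The within- versus between-language permutation logic described above can be sketched on toy data. The wh-word forms below are invented for illustration, not drawn from the 226-language sample, and the similarity measure (shared initial character) is a deliberate simplification of segment-level comparison:

```python
import itertools
import random

wh_words = {                       # invented forms, for illustration only
    "LangA": ["kim", "kas", "kur", "kada"],
    "LangB": ["wat", "wie", "waar", "wanneer"],
    "LangC": ["shu", "shen", "sha", "shei"],
}

def within_similarity(lexicon):
    """Mean proportion of within-language word pairs sharing an initial segment."""
    sims = []
    for words in lexicon.values():
        pairs = list(itertools.combinations(words, 2))
        sims.append(sum(a[0] == b[0] for a, b in pairs) / len(pairs))
    return sum(sims) / len(sims)

observed = within_similarity(wh_words)

# Permutation: reassign words to languages at random, keeping set sizes
# fixed, to build the null distribution of within-language similarity.
rng = random.Random(0)
all_words = [w for ws in wh_words.values() for w in ws]
null = []
for _ in range(2000):
    rng.shuffle(all_words)
    shuffled = {"LangA": all_words[0:4], "LangB": all_words[4:8],
                "LangC": all_words[8:12]}
    null.append(within_similarity(shuffled))

p = sum(s >= observed for s in null) / len(null)
print(f"observed = {observed:.2f}, p = {p:.4f}")
```

Stratified permutation, as used in the study to control for language family and geographic area, amounts to restricting the shuffle so that words are only exchanged within a stratum rather than across the whole sample.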
  • Small, S. L., Hickok, G., Nusbaum, H. C., Blumstein, S., Coslett, H. B., Dell, G., Hagoort, P., Kutas, M., Marantz, A., Pylkkanen, L., Thompson-Schill, S., Watkins, K., & Wise, R. J. (2011). The neurobiology of language: Two years later [Editorial]. Brain and Language, 116(3), 103-104. doi:10.1016/j.bandl.2011.02.004.
  • Smeets, C. J. L. M., & Verbeek, D. (2014). Cerebellar ataxia and functional genomics: Identifying the routes to cerebellar neurodegeneration [Review]. Biochimica et Biophysica Acta: BBA, 1842(10), 2030-2038. doi:10.1016/j.bbadis.2014.04.004.

    Abstract

    Cerebellar ataxias are progressive neurodegenerative disorders characterized by atrophy of the cerebellum leading to motor dysfunction, balance problems, and limb and gait ataxia. These include among others, the dominantly inherited spinocerebellar ataxias, recessive cerebellar ataxias such as Friedreich's ataxia, and X-linked cerebellar ataxias. Since all cerebellar ataxias display considerable overlap in their disease phenotypes, common pathological pathways must underlie the selective cerebellar neurodegeneration. Therefore, it is important to identify the molecular mechanisms and routes to neurodegeneration that cause cerebellar ataxia. In this review, we discuss the use of functional genomic approaches including whole-exome sequencing, genome-wide gene expression profiling, miRNA profiling, epigenetic profiling, and genetic modifier screens to reveal the underlying pathogenesis of various cerebellar ataxias. These approaches have resulted in the identification of many disease genes, modifier genes, and biomarkers correlating with specific stages of the disease. This article is part of a Special Issue entitled: From Genome to Function.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2017). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Journal of Memory and Language, 93, 276-303. doi:10.1016/j.jml.2016.08.005.

    Abstract

    Ambiguity in natural language is ubiquitous, yet spoken communication is effective due to integration of information carried in the speech signal with information available in the surrounding multimodal landscape. Language-mediated visual attention requires visual and linguistic information integration and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model (TRACE) in which multimodal information is combined at the point of the lexical representations of words generated predictions of a stronger effect of phonological rhyme relative to semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information, compared to phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
  • Snijders, T. M., Petersson, K. M., & Hagoort, P. (2010). Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage, 52, 1633-1644. doi:10.1016/j.neuroimage.2010.05.035.

    Abstract

    In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interactions (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
  • Snowdon, C. T., Pieper, B. A., Boe, C. Y., Cronin, K. A., Kurian, A. V., & Ziegler, T. E. (2010). Variation in oxytocin is related to variation in affiliative behavior in monogamous, pairbonded tamarins. Hormones and Behavior, 58(4), 614-618. doi:10.1016/j.yhbeh.2010.06.014.

    Abstract

    Oxytocin plays an important role in monogamous pairbonded female voles, but not in polygamous voles. Here we examined a socially monogamous cooperatively breeding primate where both sexes share in parental care and territory defense for within species variation in behavior and female and male oxytocin levels in 14 pairs of cotton-top tamarins (Saguinus oedipus). In order to obtain a stable chronic assessment of hormones and behavior, we observed behavior and collected urinary hormonal samples across the tamarins’ 3-week ovulatory cycle. We found similar levels of urinary oxytocin in both sexes. However, basal urinary oxytocin levels varied 10-fold across pairs and pair-mates displayed similar oxytocin levels. Affiliative behavior (contact, grooming, sex) also varied greatly across the sample and explained more than half the variance in pair oxytocin levels. The variables accounting for variation in oxytocin levels differed by sex. Mutual contact and grooming explained most of the variance in female oxytocin levels, whereas sexual behavior explained most of the variance in male oxytocin levels. The initiation of contact by males and solicitation of sex by females were related to increased levels of oxytocin in both. This study demonstrates within-species variation in oxytocin that is directly related to levels of affiliative and sexual behavior. However, different behavioral mechanisms influence oxytocin levels in males and females and a strong pair relationship (as indexed by high levels of oxytocin) may require the activation of appropriate mechanisms for both sexes.
  • Sollis, E., Deriziotis, P., Saitsu, H., Miyake, N., Matsumoto, N., Hoffer, M. J. V., Ruivenkamp, C. A., Alders, M., Okamoto, N., Bijlsma, E. K., Plomp, A. S., & Fisher, S. E. (2017). Equivalent missense variant in the FOXP2 and FOXP1 transcription factors causes distinct neurodevelopmental disorders. Human Mutation, 38(11), 1542-1554. doi:10.1002/humu.23303.

    Abstract

    The closely related paralogues FOXP2 and FOXP1 encode transcription factors with shared functions in the development of many tissues, including the brain. However, while mutations in FOXP2 lead to a speech/language disorder characterized by childhood apraxia of speech (CAS), the clinical profile of FOXP1 variants includes a broader neurodevelopmental phenotype with global developmental delay, intellectual disability and speech/language impairment. Using clinical whole-exome sequencing, we report an identical de novo missense FOXP1 variant identified in three unrelated patients. The variant, p.R514H, is located in the forkhead-box DNA-binding domain and is equivalent to the well-studied p.R553H FOXP2 variant that co-segregates with CAS in a large UK family. We present here for the first time a direct comparison of the molecular and clinical consequences of the same mutation affecting the equivalent residue in FOXP1 and FOXP2. Detailed functional characterization of the two variants in cell model systems revealed very similar molecular consequences, including aberrant subcellular localization, disruption of transcription factor activity and deleterious effects on protein interactions. Nonetheless, clinical manifestations were broader and more severe in the three cases carrying the p.R514H FOXP1 variant than in individuals with the p.R553H variant related to CAS, highlighting divergent roles of FOXP2 and FOXP1 in neurodevelopment.

    Additional information

    humu23303-sup-0001-SuppMat.pdf
  • De Sousa, H. (2011). Changes in the language of perception in Cantonese. The Senses & Society, 6(1), 38-47. doi:10.2752/174589311X12893982233678.

    Abstract

    The way a language encodes sensory experiences changes over time, and often this correlates with other changes in the society. There are noticeable differences in the language of perception between older and younger speakers of Cantonese in Hong Kong and Macau. Younger speakers make finer distinctions in the distal senses, but have less knowledge of the finer categories of the proximal senses than older speakers. The difference in the language of perception between older and younger speakers probably reflects the rapid changes that happened in Hong Kong and Macau in the last fifty years, from an underdeveloped and less literate society, to a developed and highly literate society. In addition to the increase in literacy, the education system has also undergone significant Westernization. Western-style education systems have most likely created finer categorizations in the distal senses. At the same time, the traditional finer distinctions of the proximal senses have become less salient: as the society became more urbanized and sanitized, people have had fewer opportunities to experience the variety of olfactory sensations experienced by their ancestors. This case study investigating interactions between social-economic 'development' and the elaboration of the senses hopefully contributes to the study of the ineffability of senses.
  • Soutschek, A., Burke, C. J., Beharelle, A. R., Schreiber, R., Weber, S. C., Karipidis, I. I., Ten Velden, J., Weber, B., Haker, H., Kalenscher, T., & Tobler, P. N. (2017). The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour, 1, 819-827. doi:10.1038/s41562-017-0226-y.

    Abstract

    Women are known to have stronger prosocial preferences than men, but it remains an open question as to how these behavioural differences arise from differences in brain functioning. Here, we provide a neurobiological account for the hypothesized gender difference. In a pharmacological study and an independent neuroimaging study, we tested the hypothesis that the neural reward system encodes the value of sharing money with others more strongly in women than in men. In the pharmacological study, we reduced receptor type-specific actions of dopamine, a neurotransmitter related to reward processing, which resulted in more selfish decisions in women and more prosocial decisions in men. Converging findings from an independent neuroimaging study revealed gender-related activity in neural reward circuits during prosocial decisions. Thus, the neural reward system appears to be more sensitive to prosocial rewards in women than in men, providing a neurobiological account for why women often behave more prosocially than men.

    A large body of evidence suggests that women are often more prosocial (for example, generous, altruistic and inequality averse) than men, at least when other factors such as reputation and strategic considerations are excluded [1,2,3]. This dissociation could result from cultural expectations and gender stereotypes, because in Western societies women are more strongly expected to be prosocial [4,5,6] and sensitive to variations in social context than men [1]. It remains an open question, however, whether and how on a neurobiological level the social preferences of women and men arise from differences in brain functioning. The assumption of gender differences in social preferences predicts that the neural reward system’s sensitivity to prosocial and selfish rewards should differ between women and men. Specifically, the hypothesis would be that the neural reward system is more sensitive to prosocial than selfish rewards in women and more sensitive to selfish than prosocial rewards in men. The goal of the current study was to test in two independent experiments for the hypothesized gender differences on both a pharmacological and a haemodynamic level. In particular, we examined the functions of the neurotransmitter dopamine using a dopamine receptor antagonist, and the role of the striatum (a brain region strongly innervated by dopamine neurons) during social decision-making in women and men using neuroimaging.

    The neurotransmitter dopamine is thought to play a key role in neural reward processing [7,8]. Recent evidence suggests that dopaminergic activity is sensitive not only to rewards for oneself but to rewards for others as well [9]. The assumption that dopamine is sensitive to both self- and other-related outcomes is consistent with the finding that the striatum shows activation for both selfish and shared rewards [10,11,12,13,14,15]. The dopaminergic response may represent a net signal encoding the difference between the value of preferred and unpreferred rewards [8]. Regarding the hypothesized gender differences in social preferences, this account makes the following predictions. If women prefer shared (prosocial) outcomes [2], women’s dopaminergic signals to shared rewards will be stronger than to non-shared (selfish) rewards, so reducing dopaminergic activity should bias women to make more selfish decisions. In line with this hypothesis, a functional imaging study reported enhanced striatal activation in female participants during charitable donations [11]. In contrast, if men prefer selfish over prosocial rewards, dopaminergic activity should be enhanced to selfish compared to prosocial rewards. In line with this view, upregulating dopaminergic activity in a sample of exclusively male participants increased selfish behaviour in a bargaining game [16]. Thus, contrary to the hypothesized effect in women, reducing dopaminergic neurotransmission should render men more prosocial. Taken together, the current study tested the following three predictions: we expected the dopaminergic reward system (1) to be more sensitive to prosocial than selfish rewards in women and (2) to be more sensitive to selfish than prosocial rewards in men. As a consequence of these two predictions, we also predicted (3) dopaminoceptive regions such as the striatum to show stronger activation to prosocial relative to selfish rewards in women than in men.

    To test these predictions, we conducted a pharmacological study in which we reduced dopaminergic neurotransmission with amisulpride. Amisulpride is a dopamine antagonist that is highly specific for dopaminergic D2/D3 receptors [17]. After receiving amisulpride or placebo, participants performed an interpersonal decision task [18,19,20], in which they made choices between a monetary reward only for themselves (selfish reward option) and sharing money with others (prosocial reward option). We expected that blocking dopaminergic neurotransmission with amisulpride, relative to placebo, would result in fewer prosocial choices in women and more prosocial choices in men. To investigate whether potential gender-related effects of dopamine are selective for social decision-making, we also tested the effects of amisulpride on time preferences in a non-social control task that was matched to the interpersonal decision task in terms of choice structure.

    In addition, because dopaminergic neurotransmission plays a crucial role in brain regions involved in value processing, such as the striatum [21], a gender-related role of dopaminergic activity for social decision-making should also be reflected by dissociable activity patterns in the striatum. Therefore, to further test our hypothesis, we investigated the neural correlates of social decision-making in a functional imaging study. In line with our predictions for the pharmacological study, we expected to find stronger striatum activity during prosocial relative to selfish decisions in women, whereas men should show enhanced activity in the striatum for selfish relative to prosocial choices.

  • Spada, D., Verga, L., Iadanza, A., Tettamanti, M., & Perani, D. (2014). The auditory scene: An fMRI study on melody and accompaniment in professional pianists. NeuroImage, 102(2), 764-775. doi:10.1016/j.neuroimage.2014.08.036.

    Abstract

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both of these predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes.
  • Speed, L. J., & Majid, A. (2017). Dutch modality exclusivity norms: Simulating perceptual modality in space. Behavior Research Methods, 49(6), 2204-2218. doi:10.3758/s13428-017-0852-3.

    Abstract

    Perceptual information is important for the meaning of nouns. We present modality exclusivity norms for 485 Dutch nouns rated on visual, auditory, haptic, gustatory, and olfactory associations. We found these nouns are highly multimodal. They were rated most dominant in vision, and least in olfaction. A factor analysis identified two main dimensions: one loaded strongly on olfaction and gustation (reflecting joint involvement in flavor), and a second loaded strongly on vision and touch (reflecting joint involvement in manipulable objects). In a second study, we validated the ratings with similarity judgments. As expected, words from the same dominant modality were rated more similar than words from different dominant modalities; but – more importantly – this effect was enhanced when word pairs had high modality strength ratings. We further demonstrated the utility of our ratings by investigating whether perceptual modalities are differentially experienced in space, in a third study. Nouns were categorized into their dominant modality and used in a lexical decision experiment where the spatial position of words was either in proximal or distal space. We found words dominant in olfaction were processed faster in proximal than distal space compared to the other modalities, suggesting olfactory information is mentally simulated as “close” to the body. Finally, we collected ratings of emotion (valence, dominance, and arousal) to assess its role in perceptual space simulation, but the valence did not explain the data. So, words are processed differently depending on their perceptual associations, and strength of association is captured by modality exclusivity ratings.

  • Stergiakouli, E., Martin, J., Hamshere, M. L., Heron, J., St Pourcain, B., Timpson, N. J., Thapar, A., & Smith, G. D. (2017). Association between polygenic risk scores for attention-deficit hyperactivity disorder and educational and cognitive outcomes in the general population. International Journal of Epidemiology, 46(2), 421-428. doi:10.1093/ije/dyw216.

    Abstract

    Background: Children with a diagnosis of attention-deficit hyperactivity disorder (ADHD) have lower cognitive ability and are at risk of adverse educational outcomes; ADHD genetic risks have been found to predict childhood cognitive ability and other neurodevelopmental traits in the general population; thus genetic risks might plausibly also contribute to cognitive ability later in development and to educational underachievement.

    Methods: We generated ADHD polygenic risk scores in the Avon Longitudinal Study of Parents and Children participants (maximum N: 6928 children and 7280 mothers) based on the results of a discovery clinical sample, a genome-wide association study of 727 cases with ADHD diagnosis and 5081 controls. We tested if ADHD polygenic risk scores were associated with educational outcomes and IQ in adolescents and their mothers.

    Results: High ADHD polygenic scores in adolescents were associated with worse educational outcomes at Key Stage 3 [national tests conducted at age 13–14 years; β = −1.4 (−2.0 to −0.8), P = 2.3 × 10−6], at General Certificate of Secondary Education exams at age 15–16 years [β = −4.0 (−6.1 to −1.9), P = 1.8 × 10−4], reduced odds of sitting Key Stage 5 examinations at age 16–18 years [odds ratio (OR) = 0.90 (0.88 to 0.97), P = 0.001] and lower IQ scores at age 15.5 [β = −0.8 (−1.2 to −0.4), P = 2.4 × 10−4]. Moreover, maternal ADHD polygenic scores were associated with lower maternal educational achievement [β = −0.09 (−0.10 to −0.06), P = 0.005] and lower maternal IQ [β = −0.6 (−1.2 to −0.1), P = 0.03].

    Conclusions: ADHD diagnosis risk alleles impact on functional outcomes in two generations (mother and child) and likely have intergenerational environmental effects.
  • Stergiakouli, E., Gaillard, R., Tavaré, J. M., Balthasar, N., Loos, R. J., Taal, H. R., Evans, D. M., Rivadeneira, F., St Pourcain, B., Uitterlinden, A. G., Kemp, J. P., Hofman, A., Ring, S. M., Cole, T. J., Jaddoe, V. W. V., Davey Smith, G., & Timpson, N. J. (2014). Genome-wide association study of height-adjusted BMI in childhood identifies functional variant in ADCY3. Obesity, 22(10), 2252-2259. doi:10.1002/oby.20840.

    Abstract

    OBJECTIVE: Genome-wide association studies (GWAS) of BMI are mostly undertaken under the assumption that "kg/m²" is an index of weight fully adjusted for height, but in general this is not true. The aim here was to assess the contribution of common genetic variation to an adjusted version of that phenotype which appropriately accounts for covariation in height in children. METHODS: A GWAS of height-adjusted BMI (BMI[x] = weight/height^x), calculated to be uncorrelated with height, in 5809 participants (mean age 9.9 years) from the Avon Longitudinal Study of Parents and Children (ALSPAC) was performed. RESULTS: GWAS based on BMI[x] yielded marked differences in the genome-wide results profile. SNPs in ADCY3 (adenylate cyclase 3) were associated at the genome-wide significance level [rs11676272: 0.28 kg/m^3.1 change per allele G (0.19, 0.38), P = 6 × 10−9]. In contrast, they showed marginal evidence of association with conventional BMI [rs11676272: 0.25 kg/m² (0.15, 0.35), P = 6 × 10−7]. Results were replicated in an independent sample, the Generation R study. CONCLUSIONS: Analysis of BMI[x] showed differences to that of conventional BMI. The association signal at ADCY3 appeared to be driven by a missense variant and it was strongly correlated with expression of this gene. Our work highlights the importance of well-understood phenotype use (and the danger of convention) in characterising genetic contributions to complex traits.

  • Stergiakouli, E., Smith, G. D., Martin, J., Skuse, D. H., Viechtbauer, W., Ring, S. M., Ronald, A., Evans, D. E., Fisher, S. E., Thapar, A., & St Pourcain, B. (2017). Shared genetic influences between dimensional ASD and ADHD symptoms during child and adolescent development. Molecular Autism, 8: 18. doi:10.1186/s13229-017-0131-2.

    Abstract

    Background: Shared genetic influences between attention-deficit/hyperactivity disorder (ADHD) symptoms and autism spectrum disorder (ASD) symptoms have been reported. Cross-trait genetic relationships are, however, subject to dynamic changes during development. We investigated the continuity of genetic overlap between ASD and ADHD symptoms in a general population sample during childhood and adolescence. We also studied uni- and cross-dimensional trait-disorder links with respect to genetic ADHD and ASD risk.

    Methods: Social-communication difficulties (N ≤ 5551, Social and Communication Disorders Checklist, SCDC) and combined hyperactive-impulsive/inattentive ADHD symptoms (N ≤ 5678, Strengths and Difficulties Questionnaire, SDQ-ADHD) were repeatedly measured in a UK birth cohort (ALSPAC, age 7 to 17 years). Genome-wide summary statistics on clinical ASD (5305 cases; 5305 pseudo-controls) and ADHD (4163 cases; 12,040 controls/pseudo-controls) were available from the Psychiatric Genomics Consortium. Genetic trait variances and genetic overlap between phenotypes were estimated using genome-wide data.

    Results: In the general population, genetic influences for SCDC and SDQ-ADHD scores were shared throughout development. Genetic correlations across traits reached a similar strength and magnitude (cross-trait rg ≤ 1, pmin = 3 × 10−4) as those between repeated measures of the same trait (within-trait rg ≤ 0.94, pmin = 7 × 10−4). Shared genetic influences between traits, especially during later adolescence, may implicate variants in K-RAS signalling upregulated genes (p-meta = 6.4 × 10−4). Uni-dimensionally, each population-based trait mapped to the expected behavioural continuum: risk-increasing alleles for clinical ADHD were persistently associated with SDQ-ADHD scores throughout development (marginal regression R2 = 0.084%). An age-specific genetic overlap between clinical ASD and social-communication difficulties during childhood was also shown, as per previous reports. Cross-dimensionally, however, neither SCDC nor SDQ-ADHD scores were linked to genetic risk for disorder.

    Conclusions: In the general population, genetic aetiologies between social-communication difficulties and ADHD symptoms are shared throughout child and adolescent development and may implicate similar biological pathways that co-vary during development. Within both the ASD and the ADHD dimension, population-based traits are also linked to clinical disorder, although much larger clinical discovery samples are required to reliably detect cross-dimensional trait-disorder relationships.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

  • Stivers, T. (2004). Potilaan vastarinta: Keino vaikuttaa lääkärin hoitopäätökseen. Sosiaalilääketieteellinen Aikakauslehti, 41, 199-213.
  • Stivers, T., & Rossano, F. (2010). A scalar view of response relevance. Research on Language and Social Interaction, 43, 49-56. doi:10.1080/08351810903471381.
  • Stivers, T. (2010). An overview of the question-response system in American English conversation. Journal of Pragmatics, 42, 2772-2781. doi:10.1016/j.pragma.2010.04.011.

    Abstract

    This article, part of a 10-language comparative project on question–response sequences, discusses these sequences in American English conversation. The data are video-taped spontaneous naturally occurring conversations involving two to five adults. Relying on these data, I document the basic distributional patterns of types of questions asked (polar, Q-word or alternative as well as sub-types), types of social actions implemented by these questions (e.g., repair initiations, requests for confirmation, offers or requests for information), and types of responses (e.g., repetitional answers or yes/no tokens). I show that declarative questions are used more commonly in conversation than would be suspected by traditional grammars of English and questions are used for a wider range of functions than grammars would suggest. Finally, this article offers distributional support for the idea that responses that are better “fitted” with the question are preferred.
  • Stivers, T., & Enfield, N. J. (2010). A coding scheme for question-response sequences in conversation. Journal of Pragmatics, 42, 2620-2626. doi:10.1016/j.pragma.2010.04.002.
  • Stivers, T. (2004). "No no no" and other types of multiple sayings in social interaction. Human Communication Research, 30(2), 260-293. doi:10.1111/j.1468-2958.2004.tb00733.x.

    Abstract

    Relying on the methodology of conversation analysis, this article examines a practice in ordinary conversation characterized by the resaying of a word, phrase, or sentence. The article shows that multiple sayings such as "No no no" or "Alright alright alright" are systematic in both their positioning relative to the interlocutor's talk and in their function. Specifically, the findings are that multiple sayings are a resource speakers have to display that their turn is addressing an in-progress course of action rather than only the just prior utterance. Speakers of multiple sayings communicate their stance that the prior speaker has persisted unnecessarily in the prior course of action and should properly halt the course of action.
  • Stivers, T., & Rossano, F. (2010). Mobilizing response. Research on Language and Social Interaction, 43, 3-31. doi:10.1080/08351810903471258.

    Abstract

    A fundamental puzzle in the organization of social interaction concerns how one individual elicits a response from another. This article asks what it is about some sequentially initial turns that reliably mobilizes a coparticipant to respond and under what circumstances individuals are accountable for producing a response. Whereas a linguistic approach suggests that this is what “questions” (more generally) and interrogativity (more narrowly) are for, a sociological approach to social interaction suggests that the social action a person is implementing mobilizes a recipient's response. We find that although both theories have merit, neither adequately solves the puzzle. We argue instead that different actions mobilize response to different degrees. Speakers then design their turns to perform actions, and with particular response-mobilizing features of turn-design speakers can hold recipients more accountable for responding or not. This model of response relevance allows sequential position, action, and turn design to each contribute to response relevance.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (Eds.). (2010). Question-response sequences in conversation across ten languages [Special Issue]. Journal of Pragmatics, 42(10). doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (2010). Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics, 42, 2615-2619. doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., & Hayashi, M. (2010). Transformative answers: One way to resist a question's constraints. Language in Society, 39, 1-25. doi:10.1017/S0047404509990637.

    Abstract

    A number of Conversation Analytic studies have documented that question recipients have a variety of ways to push against the constraints that questions impose on them. This article explores the concept of transformative answers – answers through which question recipients retroactively adjust the question posed to them. Two main sorts of adjustments are discussed: question term transformations and question agenda transformations. It is shown that the operations through which interactants implement term transformations are different from the operations through which they implement agenda transformations. Moreover, term-transforming answers resist only the question’s design, while agenda-transforming answers effectively resist both design and agenda, thus implying that agenda-transforming answers resist more strongly than design-transforming answers. The implications of these different sorts of transformations for alignment and affiliation are then explored.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2017). Second language attainment and first language attrition: The case of VOT in immersed Dutch–German late bilinguals. Second Language Research, 33(4), 483-518. doi:10.1177/0267658317704261.

    Abstract

    Speech of late bilinguals has frequently been described in terms of cross-linguistic influence (CLI) from the native language (L1) to the second language (L2), but CLI from the L2 to the L1 has received relatively little attention. This article addresses L2 attainment and L1 attrition in voicing systems through measures of voice onset time (VOT) in two groups of Dutch–German late bilinguals in the Netherlands. One group comprises native speakers of Dutch and the other group comprises native speakers of German, and the two groups further differ in their degree of L2 immersion. The L1-German–L2-Dutch bilinguals (N = 23) are exposed to their L2 at home and outside the home, and the L1-Dutch–L2-German bilinguals (N = 18) are only exposed to their L2 at home. We tested L2 attainment by comparing the bilinguals’ L2 to the other bilinguals’ L1, and L1 attrition by comparing the bilinguals’ L1 to Dutch monolinguals (N = 29) and German monolinguals (N = 27). Our findings indicate that complete L2 immersion may be advantageous in L2 acquisition, but at the same time it may cause L1 phonetic attrition. We discuss how the results match the predictions made by Flege’s Speech Learning Model and explore how far bilinguals’ success in acquiring L2 VOT and maintaining L1 VOT depends on the immersion context, articulatory constraints and the risk of sounding foreign accented.
  • Stolk, A., Noordzij, M. L., Verhagen, L., Volman, I., Schoffelen, J.-M., Oostenveld, R., Hagoort, P., & Toni, I. (2014). Cerebral coherence between communicators marks the emergence of meaning. Proceedings of the National Academy of Sciences of the United States of America, 111, 18183-18188. doi:10.1073/pnas.1414886111.

    Abstract

    How can we understand each other during communicative interactions? An influential suggestion holds that communicators are primed by each other’s behaviors, with associative mechanisms automatically coordinating the production of communicative signals and the comprehension of their meanings. An alternative suggestion posits that mutual understanding requires shared conceptualizations of a signal’s use, i.e., “conceptual pacts” that are abstracted away from specific experiences. Both accounts predict coherent neural dynamics across communicators, aligned either to the occurrence of a signal or to the dynamics of conceptual pacts. Using coherence spectral-density analysis of cerebral activity simultaneously measured in pairs of communicators, this study shows that establishing mutual understanding of novel signals synchronizes cerebral dynamics across communicators’ right temporal lobes. This interpersonal cerebral coherence occurred only within pairs with a shared communicative history, and at temporal scales independent from signals’ occurrences. These findings favor the notion that meaning emerges from shared conceptualizations of a signal’s use.
  • Ye, Z., Stolk, A., Toni, I., & Hagoort, P. (2017). Oxytocin modulates semantic integration in speech comprehension. Journal of Cognitive Neuroscience, 29, 267-276. doi:10.1162/jocn_a_01044.

    Abstract

    Listeners interpret utterances by integrating information from multiple sources including word level semantics and world knowledge. When the semantics of an expression is inconsistent with his or her knowledge about the world, the listener may have to search through the conceptual space for alternative possible world scenarios that can make the expression more acceptable. Such cognitive exploration requires considerable computational resources and might depend on motivational factors. This study explores whether and how oxytocin, a neuropeptide known to influence social motivation by reducing social anxiety and enhancing affiliative tendencies, can modulate the integration of world knowledge and sentence meanings. The study used a between-participant double-blind randomized placebo-controlled design. Semantic integration, indexed with magnetoencephalography through the N400m marker, was quantified while 45 healthy male participants listened to sentences that were either congruent or incongruent with facts of the world, after receiving intranasally delivered oxytocin or placebo. Compared with congruent sentences, world knowledge incongruent sentences elicited a stronger N400m signal from the left inferior frontal and anterior temporal regions and medial pFC (the N400m effect) in the placebo group. Oxytocin administration significantly attenuated the N400m effect at both sensor and cortical source levels throughout the experiment, in a state-like manner. Additional electrophysiological markers suggest that the absence of the N400m effect in the oxytocin group is unlikely due to the lack of early sensory or semantic processing or a general downregulation of attention. These findings suggest that oxytocin drives listeners to resolve challenges of semantic integration, possibly by promoting the cognitive exploration of alternative possible world scenarios.
  • Stolk, A., Noordzij, M. L., Volman, I., Verhagen, L., Overeem, S., van Elswijk, G., Bloem, B., Hagoort, P., & Toni, I. (2014). Understanding communicative actions: A repetitive TMS study. Cortex, 51, 25-34. doi:10.1016/j.cortex.2013.10.005.

    Abstract

    Despite the ambiguity inherent in human communication, people are remarkably efficient in establishing mutual understanding. Studying how people communicate in novel settings provides a window into the mechanisms supporting the human competence to rapidly generate and understand novel shared symbols, a fundamental property of human communication. Previous work indicates that the right posterior superior temporal sulcus (pSTS) is involved when people understand the intended meaning of novel communicative actions. Here, we set out to test whether normal functioning of this cerebral structure is required for understanding novel communicative actions using inhibitory low-frequency repetitive transcranial magnetic stimulation (rTMS). A factorial experimental design contrasted two tightly matched stimulation sites (right pSTS vs. left MT+, i.e. a contiguous homotopic task-relevant region) and tasks (a communicative task vs. a visual tracking task that used the same sequences of stimuli). Overall task performance was not affected by rTMS, whereas changes in task performance over time were disrupted according to TMS site and task combinations. Namely, rTMS over pSTS led to a diminished ability to improve action understanding on the basis of recent communicative history, while rTMS over MT+ perturbed improvement in visual tracking over trials. These findings qualify the contributions of the right pSTS to human communicative abilities, showing that this region might be necessary for incorporating previous knowledge, accumulated during interactions with a communicative partner, to constrain the inferential process that leads to action understanding.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Review of the book La parole inuit: Langue, culture et société dans l'Arctique nord-américain by Louis-Jacques Dorais]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Tachmazidou, I., Süveges, D., Min, J. L., Ritchie, G. R. S., Steinberg, J., Walter, K., Iotchkova, V., Schwartzentruber, J., Huang, J., Memari, Y., McCarthy, S., Crawford, A. A., Bombieri, C., Cocca, M., Farmaki, A.-E., Gaunt, T. R., Jousilahti, P., Kooijman, M. N., Lehne, B., Malerba, G., Männistö, S., Matchan, A., Medina-Gomez, C., Metrustry, S. J., Nag, A., Ntalla, I., Paternoster, L., Rayner, N. W., Sala, C., Scott, W. R., Shihab, H. A., Southam, L., St Pourcain, B., Traglia, M., Trajanoska, K., Zaza, G., Zhang, W., Artigas, M. S., Bansal, N., Benn, M., Chen, Z., Danecek, P., Lin, W.-Y., Locke, A., Luan, J., Manning, A. K., Mulas, A., Sidore, C., Tybjaerg-Hansen, A., Varbo, A., Zoledziewska, M., Finan, C., Hatzikotoulas, K., Hendricks, A. E., Kemp, J. P., Moayyeri, A., Panoutsopoulou, K., Szpak, M., Wilson, S. G., Boehnke, M., Cucca, F., Di Angelantonio, E., Langenberg, C., Lindgren, C., McCarthy, M. I., Morris, A. P., Nordestgaard, B. G., Scott, R. A., Tobin, M. D., Wareham, N. J., Burton, P., Chambers, J. C., Smith, G. D., Dedoussis, G., Felix, J. F., Franco, O. H., Gambaro, G., Gasparini, P., Hammond, C. J., Hofman, A., Jaddoe, V. W. V., Kleber, M., Kooner, J. S., Perola, M., Relton, C., Ring, S. M., Rivadeneira, F., Salomaa, V., Spector, T. D., Stegle, O., Toniolo, D., Uitterlinden, A. G., Barroso, I., Greenwood, C. M. T., Perry, J. R. B., Walker, B. R., Butterworth, A. S., Xue, Y., Durbin, R., Small, K. S., Soranzo, N., Timpson, N. J., & Zeggini, E. (2017). Whole-genome sequencing coupled to imputation discovers genetic signals for anthropometric traits. The American Journal of Human Genetics, 100(6), 865-884. doi:10.1016/j.ajhg.2017.04.014.

    Abstract

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture of 12 anthropometric traits associated with height, body mass, and fat distribution in up to 267,616 individuals. We report 106 genome-wide significant signals that have not been previously identified, including 9 low-frequency variants pointing to functional candidates. Of the 106 signals, 6 are in genomic regions that have not been implicated with related traits before, 28 are independent signals at previously reported regions, and 72 represent previously reported signals for a different anthropometric trait. 71% of signals reside within genes and fine mapping resolves 23 signals to one or two likely causal variants. We confirm genetic overlap between human monogenic and polygenic anthropometric traits and find signal enrichment in cis expression QTLs in relevant tissues. Our results highlight the potential of WGS strategies to enhance biologically relevant discoveries across the frequency spectrum.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation (“where” information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Takashima, A., Wagensveld, B., Van Turennout, M., Zwitserlood, P., Hagoort, P., & Verhoeven, L. (2014). Training-induced neural plasticity in visual-word decoding and the role of syllables. Neuropsychologia, 61, 299-314. doi:10.1016/j.neuropsychologia.2014.06.017.

    Abstract

    To investigate the neural underpinnings of word decoding, and how it changes as a function of repeated exposure, we trained Dutch participants over the course of a month to articulate a set of novel disyllabic input strings written in Greek script, to avoid the use of familiar orthographic representations. The syllables in the input were phonotactically legal combinations but non-existent in the Dutch language, allowing us to assess their role in novel word decoding. Not only trained disyllabic pseudowords were tested but also pseudowords with recombined patterns of syllables to uncover the emergence of syllabic representations. We showed that with extensive training, articulation became faster and more accurate for the trained pseudowords. On the neural level, the initial stage of decoding was reflected by increased activity in visual attention areas of occipito-temporal and occipito-parietal cortices, and in motor coordination areas of the precentral gyrus and the inferior frontal gyrus. After one month of training, memory representations for holistic information (whole word unit) were established in areas encompassing the angular gyrus, the precuneus and the middle temporal gyrus. Syllabic representations also emerged through repeated training of disyllabic pseudowords, such that reading recombined syllables of the trained pseudowords showed similar brain activation to trained pseudowords and were articulated faster than novel combinations of letter strings used in the trained pseudowords.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2017). Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. Brain and Language, 167, 44-60. doi:10.1016/j.bandl.2016.05.009.

    Abstract

    When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Richness of information about novel words influences how episodic and semantic memory networks interact during lexicalization. NeuroImage, 84, 265-278. doi:10.1016/j.neuroimage.2013.08.023.

    Abstract

    The complementary learning systems account of declarative memory suggests two distinct memory networks, a fast-mapping, episodic system involving the hippocampus, and a slower semantic memory system distributed across the neocortex in which new information is gradually integrated with existing representations. In this study, we investigated the extent to which these two networks are involved in the integration of novel words into the lexicon after extensive learning, and how the involvement of these networks changes after 24 hours. In particular, we explored whether having richer information at encoding influences the lexicalization trajectory. We trained participants with two sets of novel words, one where exposure was only to the words’ phonological forms (the form-only condition), and one where pictures of unfamiliar objects were associated with the words’ phonological forms (the picture-associated condition). A behavioral measure of lexical competition (indexing lexicalization) indicated stronger competition effects for the form-only words. Imaging (fMRI) results revealed greater involvement of phonological lexical processing areas immediately after training in the form-only condition, suggesting tight connections were formed between novel words and existing lexical entries already at encoding. Retrieval of picture-associated novel words involved the episodic/hippocampal memory system more extensively. Although lexicalization was weaker in the picture-associated condition, overall memory strength was greater when tested after a 24-hour delay, probably due to the availability of both episodic and lexical memory networks to aid retrieval. It appears that, during lexicalization of a novel word, the relative involvement of different memory networks differs according to the richness of the information about that word available at encoding.
  • Takaso, H., Eisner, F., Wise, R. J. S., & Scott, S. K. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: A Positron Emission Tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236. doi:10.1044/1092-4388(2009/09-0009).

    Abstract

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many non-stuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission tomography (PET) was used to image regional cerebral blood flow changes, an index of neural activity, and to assess the influence of increasing amounts of delay. Results: Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). Conclusions: This study permitted distinctions to be made between the neural response to hearing one's voice at a delay, and the neural activity that correlates with this delay. Notably, all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensori-motor ‘how’ system in the production of speech under conditions of delayed auditory feedback.
  • Tamaoka, K., Makioka, S., Sanders, S., & Verdonschot, R. G. (2017). www.kanjidatabase.com: A new interactive online database for psychological and linguistic research on Japanese kanji and their compound words. Psychological Research, 81(3), 696-708. doi:10.1007/s00426-016-0764-3.

    Abstract

    Most experimental research making use of the Japanese language has involved the 1945 officially standardized kanji (Japanese logographic characters) in the Joyo kanji list (originally announced by the Japanese government in 1981). However, this list was extensively modified in 2010: five kanji were removed and 196 kanji were added; the latest revision of the list now has a total of 2136 kanji. Using an up-to-date corpus consisting of 11 years' worth of articles printed in the Mainichi Newspaper (2000-2010), we have constructed two novel databases that can be used in psychological research using the Japanese language: (1) a database containing a wide variety of properties on the latest 2136 Joyo kanji, and (2) a novel database containing 27,950 two-kanji compound words (or jukugo). Based on these two databases, we have created an interactive website (www.kanjidatabase.com) to retrieve and store linguistic information to be used in psychological and linguistic experiments. The present paper reports the most important characteristics for the new databases, as well as their value for experimental psychological and linguistic research.
  • Tamaoka, K., Saito, N., Kiyama, S., Timmer, K., & Verdonschot, R. G. (2014). Is pitch accent necessary for comprehension by native Japanese speakers? - An ERP investigation. Journal of Neurolinguistics, 27(1), 31-40. doi:10.1016/j.jneuroling.2013.08.001.

    Abstract

    Not unlike the tonal system in Chinese, Japanese habitually attaches pitch accents to the production of words. However, in contrast to Chinese, few homophonic word-pairs are really distinguished by pitch accents (Shibata & Shibata, 1990). This predicts that pitch accent plays a small role in lexical selection for Japanese language comprehension. The present study investigated whether native Japanese speakers necessarily use pitch accent in the processing of accent-contrasted homophonic pairs (e.g., ame [LH] for 'candy' and ame [HL] for 'rain') measuring electroencephalographic (EEG) potentials. Electrophysiological evidence (i.e., N400) was obtained when a word was semantically incorrect for a given context but not for incorrectly accented homophones. This suggests that pitch accent indeed plays a minor role when understanding Japanese.
  • Tan, Y., Martin, R. C., & Van Dyke, J. A. (2017). Semantic and syntactic interference in sentence comprehension: A comparison of working memory models. Frontiers in Psychology, 8: 198. doi:10.3389/fpsyg.2017.00198.

    Abstract

    This study investigated the nature of the underlying working memory system supporting sentence processing through examining individual differences in sensitivity to retrieval interference effects during sentence comprehension. Interference effects occur when readers incorrectly retrieve sentence constituents which are similar to those required during integrative processes. We examined interference arising from a partial match between distracting constituents and syntactic and semantic cues, and related these interference effects to performance on working memory, short-term memory (STM), vocabulary, and executive function tasks. For online sentence comprehension, as measured by self-paced reading, the magnitude of individuals' syntactic interference effects was predicted by general WM capacity and the relation remained significant when partialling out vocabulary, indicating that the effects were not due to verbal knowledge. For offline sentence comprehension, as measured by responses to comprehension questions, both general WM capacity and vocabulary knowledge interacted with semantic interference for comprehension accuracy, suggesting that both general WM capacity and the quality of semantic representations played a role in determining how well interference was resolved offline. For comprehension question reaction times, a measure of semantic STM capacity interacted with semantic but not syntactic interference. However, a measure of phonological capacity (digit span) and a general measure of resistance to response interference (Stroop effect) did not predict individuals' interference resolution abilities in either online or offline sentence comprehension. The results are discussed in relation to the multiple capacities account of working memory (e.g., Martin and Romani, 1994; Martin and He, 2004), and the cue-based retrieval parsing approach (e.g., Lewis et al., 2006; Van Dyke et al., 2014). While neither approach was fully supported, a possible means of reconciling the two approaches and directions for future research are proposed.
  • Tanner, J. E., & Perlman, M. (2017). Moving beyond ‘meaning’: Gorillas combine gestures into sequences for creative display. Language & Communication, 54, 56-72. doi:10.1016/j.langcom.2016.10.006.

    Abstract

    The great apes produce gestures intentionally and flexibly, and sometimes they combine their gestures into sequences, producing two or more gestures in close succession. We reevaluate previous findings related to ape gesture sequences and present qualitative analysis of videotaped gorilla interaction. We present evidence that gorillas produce at least two different kinds of gesture sequences: some sequences are largely composed of gestures that depict motion in an iconic manner, typically requesting particular action by the partner; others are multimodal and contain gestures – often percussive in nature – that are performed in situations of play or display. Display sequences seem to primarily exhibit the performer’s emotional state and physical fitness but have no immediate functional goal. Analysis reveals that some gorilla play and display sequences can be 1) organized hierarchically into longer bouts and repetitions; 2) innovative and individualized, incorporating objects and environmental features; and 3) highly interactive between partners. It is illuminating to look beyond ‘meaning’ in the conventional linguistic sense and look at the possibility that characteristics of music and dance, as well as those of language, are included in the gesturing of apes.
  • Tanner, D., Nicol, J., & Brehm, L. (2014). The time-course of feature interference in agreement comprehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76, 195-215. doi:10.1016/j.jml.2014.07.003.

    Abstract

    Attraction interference in language comprehension and production may result from common or different processes. In the present paper, we investigate attraction interference during language comprehension, focusing on the contexts in which interference arises and the time-course of these effects. Using evidence from event-related brain potentials (ERPs) and sentence judgment times, we show that agreement attraction in comprehension is best explained as morphosyntactic interference during memory retrieval. This stands in contrast to attraction as a message-level process involving the representation of the subject NP's number features, which is a strong contributor to attraction in production. We thus argue that the cognitive antecedents of agreement attraction in comprehension are non-identical with those of attraction in production, and moreover, that attraction in comprehension is primarily a consequence of similarity-based interference in cue-based memory retrieval processes. We suggest that mechanisms responsible for attraction during language comprehension are a subset of those involved in language production.
  • Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.

    Abstract

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.
  • Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.

    Abstract

    When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke et al., 2008, and Moores et al., 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., Mehta, A. D., Megevand, P., Groppe, D. M., & Zion-Golumbic, E. (2017). Low-frequency cortical oscillations entrain to subthreshold rhythmic auditory stimuli. The Journal of Neuroscience, 37(19), 4903-4912. doi:10.1523/JNEUROSCI.3658-16.2017.

    Abstract

    Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this "inaudible" rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness.
  • Ten Oever, S., Schroeder, C. E., Poeppel, D., Van Atteveldt, N., & Zion-Golumbic, E. (2014). Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia, 63, 43-50. doi:10.1016/j.neuropsychologia.2014.08.008.

    Abstract

    Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
  • Terrill, A. (2010). [Review of the book Linguistic fieldwork: A practical guide by Claire Bowern]. Language, 86(2), 435-438. doi:10.1353/lan.0.0214.
  • Terrill, A. (2010). [Review of the book The Austronesian languages by R. A. Blust]. Oceanic Linguistics, 49(1), 313-316. doi:10.1353/ol.0.0061.

    Abstract

    In lieu of an abstract, here is a preview of the article. This is a marvelous, dense, scholarly, detailed, exhaustive, and ambitious book. In 800-odd pages, it seeks to describe the whole huge majesty of the Austronesian language family, as well as the history of the family, the history of ideas relating to the family, and all the ramifications of such topics. Blust doesn't just describe, he goes into exhaustive detail, and not just over a few topics, but over every topic he covers. This is an incredible achievement, representing a lifetime of experience. This is not a book to be read from cover to cover—it is a book to be dipped into, pondered, and considered, slowly and carefully. The book is not organized by area or subfamily; readers interested in one area or family can consult the authoritative work on Western Austronesian (Adelaar and Himmelmann 2005), or, for the Oceanic languages, Lynch, Ross, and Crowley (2002). Rather, Blust's stated aim "is to provide a comprehensive overview of Austronesian languages which integrates areal interests into a broader perspective" (xxiii). Thus the aim is more ambitious than just discussion of areal features or historical connections, but seeks to describe the interconnections between these. The Austronesian language family is very large, second only in size to Niger-Congo (xxii). It encompasses over 1,000 members, and its protolanguage has been dated back to 6,000 years ago (xxii). The exact groupings of some Austronesian languages are still under discussion, but broadly, the family is divided into ten major subgroups, nine of which are spoken in Taiwan, the homeland of the Austronesian family. The tenth, Malayo-Polynesian, is itself divided into two major groups: Western Malayo-Polynesian, which is spread throughout the Philippines, Indonesia, and mainland Southeast Asia to Madagascar; and Central-Eastern Malayo-Polynesian, spoken from eastern Indonesia throughout the Pacific. The geographic, cultural, and linguistic diversity of the family …
  • Terrill, A. (2011). Languages in contact: An exploration of stability and change in the Solomon Islands. Oceanic Linguistics, 50(2), 312-337.

    Abstract

    The Papuan-Oceanic world has long been considered a hotbed of contact-induced linguistic change, and there have been a number of studies of deep linguistic influence between Papuan and Oceanic languages (like those by Thurston and Ross). This paper assesses the degree and type of contact-induced language change in the Solomon Islands, between the four Papuan languages—Bilua (spoken on Vella Lavella, Western Province), Touo (spoken on southern Rendova, Western Province), Savosavo (spoken on Savo Island, Central Province), and Lavukaleve (spoken in the Russell Islands, Central Province)—and their Oceanic neighbors. First, a claim is made for a degree of cultural homogeneity for Papuan and Oceanic-speaking populations within the Solomons. Second, lexical and grammatical borrowing are considered in turn, in an attempt to identify which elements in each of the four Papuan languages may have an origin in Oceanic languages—and indeed which elements in Oceanic languages may have their origin in Papuan languages. Finally, an assessment is made of the degrees of stability versus change in the Papuan and Oceanic languages of the Solomon Islands.
  • Terwisscha van Scheltinga, A. F., Bakker, S. C., Van Haren, N. E., Boos, H. B., Schnack, H. G., Cahn, W., Hoogman, M., Zwiers, M. P., Fernandez, G., Franke, B., Hulshoff Pol, H. E., & Kahn, R. S. (2014). Association study of fibroblast growth factor genes and brain volumes in schizophrenic patients and healthy controls. Psychiatric Genetics, 24, 283-284. doi:10.1097/YPG.0000000000000057.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Teunisse, J.-P., & Hagoort, P. (2011). Neural correlates of language comprehension in autism spectrum disorders: When language conflicts with world knowledge. Neuropsychologia, 49, 1095-1104. doi:10.1016/j.neuropsychologia.2011.01.018.

    Abstract

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it is unclear at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group.

  • Theakston, A., Coates, A., & Holler, J. (2014). Handling agents and patients: Representational cospeech gestures help children comprehend complex syntactic constructions. Developmental Psychology, 50(7), 1973-1984. doi:10.1037/a0036694.

    Abstract

    Gesture is an important precursor of children’s early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children’s comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old (M = 4 years 6 months) children witnessed 2-participant actions described using the infrequent object-cleft-construction (OCC; It was the dog that the cat chased). Half saw an experimenter accompanying her descriptions with gestures representing the 2 participants and indicating the direction of action; the remaining children did not witness gesture. Children who witnessed gestures showed better comprehension of the OCC than those who did not witness gestures, both in and beyond the immediate physical context, but this benefit was restricted to the oldest 4-year-olds. In Study 2, a further group of older 4-year-old children (M = 4 years 7 months) witnessed the same 2-participant actions described by an experimenter and accompanied by gestures, but the gesture represented only the 2 participants and not the direction of the action. Again, a benefit of gesture was observed on subsequent comprehension of the OCC. We interpret these findings as demonstrating that representational cospeech gestures can help children comprehend complex linguistic structures by highlighting the roles played by the participants in the event.

  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Thiebaut de Schotten, M., Dell'Acqua, F., Forkel, S. J., Simmons, A., Vergani, F., Murphy, D. G. M., & Catani, M. (2011). A lateralized brain network for visuospatial attention. Nature Neuroscience, 14, 1245-1246. doi:10.1038/nn.2905.

    Abstract

    Right hemisphere dominance for visuospatial attention is characteristic of most humans, but its anatomical basis remains unknown. We report the first evidence in humans for a larger parieto-frontal network in the right than left hemisphere, and a significant correlation between the degree of anatomical lateralization and asymmetry of performance on visuospatial tasks. Our results suggest that hemispheric specialization is associated with an unbalanced speed of visuospatial processing.

