Publications

  • Abbott, M. J., Angele, B., Ahn, D., & Rayner, K. (2015). Skipping syntactically illegal the previews: The role of predictability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(6), 1703-1714. doi:10.1037/xlm0000142.

    Abstract

    Readers tend to skip words, particularly when they are short, frequent, or predictable. Angele and Rayner (2013) recently reported that readers are often unable to detect syntactic anomalies in parafoveal vision. In the present study, we manipulated target word predictability to assess whether contextual constraint modulates the-skipping behavior. The results provide further evidence that readers frequently skip the article the when infelicitous in context. Readers skipped predictable words more often than unpredictable words, even when the, which was syntactically illegal and unpredictable from the prior context, was presented as a parafoveal preview. The results of the experiment were simulated using E-Z Reader 10 by assuming that cloze probability can be dissociated from parafoveal visual input. It appears that when a short word is predictable in context, a decision to skip it can be made even if the information available parafoveally conflicts both visually and syntactically with those predictions.
  • Abma, R., Breeuwsma, G., & Poletiek, F. H. (2001). Toetsen in het onderwijs [Testing in education]. De Psycholoog, 36, 638-639.
  • Acheson, D. J. (2013). Signatures of response conflict monitoring in language production. Procedia - Social and Behavioral Sciences, 94, 214-215. doi:10.1016/j.sbspro.2013.09.106.
  • Acheson, D. J., & Hagoort, P. (2013). Stimulating the brain's language network: Syntactic ambiguity resolution after TMS to the IFG and MTG. Journal of Cognitive Neuroscience, 25(10), 1664-1677. doi:10.1162/jocn_a_00430.

    Abstract

    The posterior middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) are two critical nodes of the brain's language network. Previous neuroimaging evidence has supported a dissociation in language comprehension in which parts of the MTG are involved in the retrieval of lexical syntactic information and the IFG is involved in unification operations that maintain, select, and integrate multiple sources of information over time. In the present investigation, we tested for causal evidence of this dissociation by modulating activity in IFG and MTG using an offline TMS procedure: continuous theta-burst stimulation. Lexical–syntactic retrieval was manipulated by using sentences with and without a temporary word-class (noun/verb) ambiguity (e.g., run). In one group of participants, TMS was applied to the IFG and MTG, and in a control group, no TMS was applied. Eye movements were recorded and quantified at two critical sentence regions: a temporarily ambiguous region and a disambiguating region. Results show that stimulation of the IFG led to a modulation of the ambiguity effect (ambiguous–unambiguous) at the disambiguating sentence region in three measures: first fixation durations, total reading times, and regressive eye movements into the region. Both IFG and MTG stimulation modulated the ambiguity effect for total reading times in the temporarily ambiguous sentence region relative to a control group. The current results demonstrate that an offline repetitive TMS protocol can have influences at a different point in time during online processing and provide causal evidence for IFG involvement in unification operations during sentence comprehension.
  • Acheson, D. J., & Hagoort, P. (2014). Twisting tongues to test for conflict monitoring in speech production. Frontiers in Human Neuroscience, 8: 206. doi:10.3389/fnhum.2014.00206.

    Abstract

    A number of recent studies have hypothesized that monitoring in speech production may occur via domain-general mechanisms responsible for the detection of response conflict. Outside of language, two ERP components have consistently been elicited in conflict-inducing tasks (e.g., the flanker task): the stimulus-locked N2 on correct trials, and the response-locked error-related negativity (ERN). The present investigation used these electrophysiological markers to test whether a common response conflict monitor is responsible for monitoring in speech and non-speech tasks. Electroencephalography (EEG) was recorded while participants performed a tongue twister (TT) task and a manual version of the flanker task. In the TT task, people rapidly read sequences of four nonwords arranged in TT and non-TT patterns three times. In the flanker task, people responded with a left/right button press to a center-facing arrow, and conflict was manipulated by the congruency of the flanking arrows. Behavioral results showed typical effects of both tasks, with increased error rates and slower speech onset times for TT relative to non-TT trials and for incongruent relative to congruent flanker trials. In the flanker task, stimulus-locked EEG analyses replicated previous results, with a larger N2 for incongruent relative to congruent trials, and a response-locked ERN. In the TT task, stimulus-locked analyses revealed broad, frontally-distributed differences beginning around 50 ms and lasting until just before speech initiation, with TT trials more negative than non-TT trials; response-locked analyses revealed an ERN. Correlation across these measures showed some correlations within a task, but little evidence of systematic cross-task correlation. Although the present results do not speak against conflict signals from the production system serving as cues to self-monitoring, they are not consistent with signatures of response conflict being mediated by a single, domain-general conflict monitor.
  • Agus, T., Carrion Castillo, A., Pressnitzer, D., & Ramus, F. (2014). Perceptual learning of acoustic noise by individuals with dyslexia. Journal of Speech, Language, and Hearing Research, 57, 1069-1077. doi:10.1044/1092-4388(2013/13-0020).

    Abstract

    Purpose: A phonological deficit is thought to affect most individuals with developmental dyslexia. The present study addresses whether the phonological deficit is caused by difficulties with perceptual learning of fine acoustic details. Method: A demanding test of nonverbal auditory memory, “noise learning,” was administered to both adults with dyslexia and control adult participants. On each trial, listeners had to decide whether a stimulus was a 1-s noise token or 2 abutting presentations of the same 0.5-s noise token (repeated noise). Without the listener’s knowledge, the exact same noise tokens were presented over many trials. An improved ability to perform the task for such “reference” noises reflects learning of their acoustic details. Results: Listeners with dyslexia did not differ from controls in any aspect of the task, qualitatively or quantitatively. They required the same amount of training to achieve discrimination of repeated from nonrepeated noises, and they learned the reference noises as often and as rapidly as the control group. However, they did show all the hallmarks of dyslexia, including a well-characterized phonological deficit. Conclusion: The data did not support the hypothesis that deficits in basic auditory processing or nonverbal learning and memory are the cause of the phonological deficit in dyslexia.
  • Ahlsson, F., Åkerud, H., Schijven, D., Olivier, J., & Sundström-Poromaa, I. (2015). Gene expression in placentas from nondiabetic women giving birth to large for gestational age infants. Reproductive Sciences, 22(10), 1281-1288. doi:10.1177/1933719115578928.

    Abstract

    Gestational diabetes, obesity, and excessive weight gain are known independent risk factors for the birth of a large for gestational age (LGA) infant. However, only 1 in 10 infants born LGA is born to mothers with diabetes or obesity. Thus, the aim of the present study was to compare placental gene expression between healthy, nondiabetic mothers (n = 22) giving birth to LGA infants and body mass index-matched mothers (n = 24) giving birth to appropriate for gestational age infants. In the whole gene expression analysis, only 29 genes were found to be differently expressed in LGA placentas. Top upregulated genes included insulin-like growth factor binding protein 1, aminolevulinate δ synthase 2, and prolactin, whereas top downregulated genes comprised leptin, gametocyte-specific factor 1, and collagen type XVII α 1. Two enriched gene networks were identified, namely, (1) lipid metabolism, small molecule biochemistry, and organismal development and (2) cellular development, cellular growth, proliferation, and tumor morphology.
  • Alday, P. M. (2015). Be Careful When Assuming the Obvious: Commentary on “The Placement of the Head that Minimizes Online Memory: A Complex Systems Approach”. Language Dynamics and Change, 5(1), 138-146. doi:10.1163/22105832-00501008.

    Abstract

    Ferrer-i-Cancho (this volume) presents a mathematical model of both the synchronic and diachronic nature of word order based on the assumption that memory costs are a never decreasing function of distance and a few very general linguistic assumptions. However, even these minimal and seemingly obvious assumptions are not as safe as they appear in light of recent typological and psycholinguistic evidence. The interaction of word order and memory has further depths to be explored.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2015). Discovering prominence and its role in language processing: An individual (differences) approach. Linguistics Vanguard, 1(1), 201-213. doi:10.1515/lingvan-2014-1013.

    Abstract

    It has been suggested that, during real time language comprehension, the human language processing system attempts to identify the argument primarily responsible for the state of affairs (the “actor”) as quickly and unambiguously as possible. However, previous work on a prominence (e.g. animacy, definiteness, case marking) based heuristic for actor identification has suffered from underspecification of the relationship between different cue hierarchies. Qualitative work has yielded a partial ordering of many features (e.g. MacWhinney et al. 1984), but a precise quantification has remained elusive due to difficulties in exploring the full feature space in a particular language. Feature pairs tend to correlate strongly in individual languages for semantic-pragmatic reasons (e.g., animate arguments tend to be actors and actors tend to be morphosyntactically privileged), and it is thus difficult to create acceptable stimuli for a fully factorial design even for binary features. Moreover, the exponential function grows extremely rapidly and a fully crossed factorial design covering the entire feature space would be prohibitively long for a purely within-subjects design. Here, we demonstrate the feasibility of parameter estimation in a short experiment. We are able to estimate parameters at a single subject level for the parameters animacy, case and number. This opens the door for research into individual differences and population variation. Moreover, the framework we introduce here can be used in the field to measure more “exotic” languages and populations, even with small sample sizes. Finally, pooled single-subject results are used to reduce the number of free parameters in previous work based on the extended Argument Dependency Model (Bornkessel-Schlesewsky and Schlesewsky 2006, 2009, 2013, in press; Alday et al. 2014).
  • Alday, P. M. (2015). Quantity and Quality: Not a Zero-Sum Game: A computational and neurocognitive examination of human language processing. PhD Thesis, Philipps-Universität Marburg, Marburg.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2014). Towards a Computational Model of Actor-Based Language Comprehension. Neuroinformatics, 12(1), 143-179. doi:10.1007/s12021-013-9198-x.

    Abstract

    Neurophysiological data from a range of typologically diverse languages provide evidence for a cross-linguistically valid, actor-based strategy of understanding sentence-level meaning. This strategy seeks to identify the participant primarily responsible for the state of affairs (the actor) as quickly and unambiguously as possible, thus resulting in competition for the actor role when there are multiple candidates. Due to its applicability across languages with vastly different characteristics, we have proposed that the actor strategy may derive from more basic cognitive or neurobiological organizational principles, though it is also shaped by distributional properties of the linguistic input (e.g. the morphosyntactic coding strategies for actors in a given language). Here, we describe an initial computational model of the actor strategy and how it interacts with language-specific properties. Specifically, we contrast two distance metrics derived from the output of the computational model (one weighted and one unweighted) as potential measures of the degree of competition for actorhood by testing how well they predict modulations of electrophysiological activity engendered by language processing. To this end, we present an EEG study on word order processing in German and use linear mixed-effects models to assess the effect of the various distance metrics. Our results show that a weighted metric, which takes into account the weighting of an actor-identifying feature in the language under consideration, outperforms an unweighted distance measure. We conclude that actor competition effects cannot be reduced to feature overlap between multiple sentence participants and thereby to the notion of similarity-based interference, which is prominent in current memory-based models of language processing. Finally, we argue that, in addition to illuminating the underlying neurocognitive mechanisms of actor competition, the present model can form the basis for a more comprehensive, neurobiologically plausible computational model of constructing sentence-level meaning.
  • Alferink, I. (2015). Dimensions of convergence in bilingual speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Alferink, I., & Gullberg, M. (2014). French-Dutch bilinguals do not maintain obligatory semantic distinctions: Evidence from placement verbs. Bilingualism: Language and Cognition, 17, 22-37. doi:10.1017/S136672891300028X.

    Abstract

    It is often said that bilinguals are not the sum of two monolinguals but that bilingual systems represent a third pattern. This study explores the exact nature of this pattern. We ask whether there is evidence of a merged system when one language makes an obligatory distinction that the other one does not, namely in the case of placement verbs in French and Dutch, and whether such a merged system is realised as a more general or a more specific system. The results show that in elicited descriptions Belgian French-Dutch bilinguals drop one of the categories in one of the languages, resulting in a more general semantic system in comparison with the non-contact variety. They do not uphold the obligatory distinction in the verb nor elsewhere despite its communicative relevance. This raises important questions regarding how widespread these differences are and what drives these patterns.
  • Alhama, R. G., Scha, R., & Zuidema, W. (2015). How should we evaluate models of segmentation in artificial language learning? In N. A. Taatgen, M. K. van Vugt, J. P. Borst, & K. Mehlhorn (Eds.), Proceedings of ICCM 2015 (pp. 172-173). Groningen: University of Groningen.

    Abstract

    One of the challenges that infants have to solve when learning their native language is to identify the words in a continuous speech stream. Some of the experiments in Artificial Grammar Learning (Saffran, Newport, and Aslin (1996); Saffran, Aslin, and Newport (1996); Aslin, Saffran, and Newport (1998) and many more) investigate this ability. In these experiments, subjects are exposed to an artificial speech stream that contains certain regularities. Adult participants are typically tested with 2-alternative Forced Choice Tests (2AFC) in which they have to choose between a word and another sequence (typically a partword, a sequence resulting from misplacing boundaries).
  • Alhama, R. G., Scha, R., & Zuidema, W. (2014). Rule learning in humans and animals. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 371-372). Singapore: World Scientific.
  • Alibali, M. W., Kita, S., Bigelow, L. J., Wolfman, C. M., & Klein, S. M. (2001). Gesture plays a role in thinking for speaking. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 407-410). Paris, France: Éditions L'Harmattan.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). Authors' response [The ubiquity of frequency effects in first language acquisition]. Journal of Child Language, 42(2), 316-322. doi:10.1017/S0305000914000841.

    Abstract

    Our target paper argued for the ubiquity of frequency effects in acquisition, and that any comprehensive theory must take into account the multiplicity of ways that frequently occurring and co-occurring linguistic units affect the acquisition process. The commentaries on the paper provide a largely unanimous endorsement of this position, but raise additional issues likely to frame further discussion and theoretical development. Specifically, while most commentators did not deny the importance of frequency effects, all saw this as the tip of the theoretical iceberg. In this short response we discuss common themes raised in the commentaries, focusing on the broader issue of what frequency effects mean for language acquisition.

  • Ambridge, B., Pine, J. M., Rowland, C. F., Freudenthal, D., & Chang, F. (2014). Avoiding dative overgeneralisation errors: semantics, statistics or both? Language, Cognition and Neuroscience, 29(2), 218-243. doi:10.1080/01690965.2012.738300.

    Abstract

    How do children eventually come to avoid the production of overgeneralisation errors, in particular, those involving the dative (e.g., *I said her “no”)? The present study addressed this question by obtaining from adults and children (5–6, 9–10 years) judgements of well-formed and over-general datives with 301 different verbs (44 for children). A significant effect of pre-emption—whereby the use of a verb in the prepositional-object (PO)-dative construction constitutes evidence that double-object (DO)-dative uses are not permitted—was observed for every age group. A significant effect of entrenchment—whereby the use of a verb in any construction constitutes evidence that unattested dative uses are not permitted—was also observed for every age group, with both predictors also accounting for developmental change between ages 5–6 and 9–10 years. Adults demonstrated knowledge of a morphophonological constraint that prohibits Latinate verbs from appearing in the DO-dative construction (e.g., *I suggested her the trip). Verbs’ semantic properties (supplied by independent adult raters) explained additional variance for all groups and developmentally, with the relative influence of narrow- vs broad-range semantic properties increasing with age. We conclude by outlining an account of the formation and restriction of argument-structure generalisations designed to accommodate these findings.
  • Ambridge, B., & Rowland, C. F. (2013). Experimental methods in studying child language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(2), 149-168. doi:10.1002/wcs.1215.

    Abstract

    This article reviews some of the most widely used methods for studying children's language acquisition, including (1) spontaneous/naturalistic, diary, and parental report data, (2) production methods (elicited production, repetition/elicited imitation, syntactic priming/weird word order), (3) comprehension methods (act-out, pointing, intermodal preferential looking, looking while listening, conditioned head turn preference procedure, functional neuroimaging), and (4) judgment methods (grammaticality/acceptability judgments, yes-no/truth-value judgments). The review outlines the types of studies and age groups to which each method is most suited, as well as the advantages and disadvantages of each. We conclude by summarising the particular methodological considerations that apply to each paradigm and to experimental design more generally. These include (1) choosing an age-appropriate task that makes communicative sense, (2) motivating children to co-operate, (3) choosing a between-/within-subjects design, (4) the use of novel items (e.g., novel verbs), (5) fillers, (6) blocked, counterbalanced, and random presentation, (7) the appropriate number of trials and participants, (8) drop-out rates, (9) the importance of control conditions, (10) choosing a sensitive dependent measure, (11) classification of responses, and (12) using an appropriate statistical test.
  • Ambridge, B., Bidgood, A., Twomey, K. E., Pine, J. M., Rowland, C. F., & Freudenthal, D. (2015). Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization. PLoS One, 10(4): e0123723. doi:10.1371/journal.pone.0123723.

    Abstract

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Chang, F., & Bidgood, A. (2013). The retreat from overgeneralization in child language acquisition: Word learning, morphology, and verb argument structure. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 47-62. doi:10.1002/wcs.1207.

    Abstract

    This review investigates empirical evidence for different theoretical proposals regarding the retreat from overgeneralization errors in three domains: word learning (e.g., *doggie to refer to all animals), morphology [e.g., *spyer, *cooker (one who spies/cooks), *unhate, *unsqueeze, *sitted; *drawed], and verb argument structure [e.g., *Don't giggle me (c.f. Don't make me giggle); *Don't say me that (c.f. Don't say that to me)]. The evidence reviewed provides support for three proposals. First, in support of the pre-emption hypothesis, the acquisition of competing forms that express the desired meaning (e.g., spy for *spyer, sat for *sitted, and Don't make me giggle for *Don't giggle me) appears to block errors. Second, in support of the entrenchment hypothesis, repeated occurrence of particular items in particular constructions (e.g., giggle in the intransitive construction) appears to contribute to an ever strengthening probabilistic inference that non-attested uses (e.g., *Don't giggle me) are ungrammatical for adult speakers. That is, both the rated acceptability and production probability of particular errors decline with increasing frequency of pre-empting and entrenching forms in the input. Third, learners appear to acquire semantic and morphophonological constraints on particular constructions, conceptualized as properties of slots in constructions [e.g., the (VERB) slot in the morphological un-(VERB) construction or the transitive-causative (SUBJECT) (VERB) (OBJECT) argument-structure construction]. Errors occur as children acquire the fine-grained semantic and morphophonological properties of particular items and construction slots, and so become increasingly reluctant to use items in slots with which they are incompatible. Findings also suggest some role for adult feedback and conventionality; the principle that, for many given meanings, there is a conventional form that is used by all members of the speech community.
  • Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). The ubiquity of frequency effects in first language acquisition. Journal of Child Language, 42(2), 239-273. doi:10.1017/S030500091400049X.

    Abstract

    This review article presents evidence for the claim that frequency effects are pervasive in children's first language acquisition, and hence constitute a phenomenon that any successful account must explain. The article is organized around four key domains of research: children's acquisition of single words, inflectional morphology, simple syntactic constructions, and more advanced constructions. In presenting this evidence, we develop five theses. (i) There exist different types of frequency effect, from effects at the level of concrete lexical strings to effects at the level of abstract cues to thematic-role assignment, as well as effects of both token and type, and absolute and relative, frequency. High-frequency forms are (ii) early acquired and (iii) prevent errors in contexts where they are the target, but also (iv) cause errors in contexts in which a competing lower-frequency form is the target. (v) Frequency effects interact with other factors (e.g. serial position, utterance length), and the patterning of these interactions is generally informative with regard to the nature of the learning mechanism. We conclude by arguing that any successful account of language acquisition, from whatever theoretical standpoint, must be frequency sensitive to the extent that it can explain the effects documented in this review, and outline some types of account that do and do not meet this criterion.

  • Ameka, F. K. (2001). Ideophones and the nature of the adjective word class in Ewe. In F. K. E. Voeltz, & C. Kilian-Hatz (Eds.), Ideophones (pp. 25-48). Amsterdam: Benjamins.
  • Ameka, F. K. (2001). Ewe. In J. Garry, & C. Rubino (Eds.), Facts about the world’s languages: An encyclopedia of the world's major languages past and present (pp. 207-213). New York: H.W. Wilson Press.
  • Ameka, F. K., & Essegbey, J. (2013). Serialising languages: Satellite-framed, verb-framed or neither. Ghana Journal of Linguistics, 2(1), 19-38.

    Abstract

    The diversity in the coding of the core schema of motion, i.e., Path, has led to a traditional typology of languages into verb-framed and satellite-framed languages. In the former, Path is encoded in verbs; in the latter, it is encoded in non-verb elements that function as sisters to co-event-expressing verbs such as manner verbs. Verb serializing languages pose a challenge to this typology because they express Path as well as the co-event of manner in finite verbs that together function as a single predicate in a translational motion clause. We argue that these languages do not fit in the typology and constitute a type of their own. We draw on data from Akan and Frog story narrations in Ewe, a Kwa language, and Sranan, a Caribbean Creole with Gbe substrate, to show that in terms of discourse properties verb serializing languages behave like verb-framed languages with respect to some properties and like satellite-framed languages in terms of others. This study fed into the revision of the typology, and such languages are now said to be equipollently-framed languages.
  • Ameka, F. K. (2013). Possessive constructions in Likpe (Sɛkpɛlé). In A. Aikhenvald, & R. Dixon (Eds.), Possession and ownership: A crosslinguistic typology (pp. 224-242). Oxford: Oxford University Press.
  • Anderson, P., Harandi, N. M., Moisik, S. R., Stavness, I., & Fels, S. (2015). A comprehensive 3D biomechanically-driven vocal tract model including inverse dynamics for speech research. In Proceedings of Interspeech 2015: The 16th Annual Conference of the International Speech Communication Association (pp. 2395-2399).

    Abstract

    We introduce a biomechanical model of oropharyngeal structures that adds the soft palate, pharynx, and larynx to our previous models of jaw, skull, hyoid, tongue, and face in a unified model. The model includes a comprehensive description of the upper airway musculature, using point-to-point muscles that may either be embedded within the deformable structures or operate externally. The airway is described by an air-tight mesh that fits and deforms with the surrounding articulators, which enables dynamic coupling to our articulatory speech synthesizer. We demonstrate that the biomechanics, in conjunction with the skinning, supports a range from physically realistic to simplified vocal tract geometries to investigate different approaches to aeroacoustic modeling of the vocal tract. Furthermore, our model supports inverse modeling to enable investigation of plausible muscle activation patterns for generating speech.
  • Andics, A. (2013). Who is talking? Behavioural and neural evidence for norm-based coding in voice identity learning. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Andics, A., Gál, V., Vicsi, K., Rudas, G., & Vidnyánszky, Z. (2013). FMRI repetition suppression for voices is modulated by stimulus expectations. NeuroImage, 69, 277-283. doi:10.1016/j.neuroimage.2012.12.033.

    Abstract

    According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This was supported by experimental results, showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing (‘alt’) and identical (‘rep’) voice stimulus pairs were presented to the listeners in blocks, with a varying probability of alt and rep trials across blocks. We found auditory fMRI RS in the nonprimary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or not present in blocks with low repetition probability. Our results revealed that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probabilities and thus suggest that in audition, similarly to the visual modality, processing of sensory information is shaped by stimulus expectation processes.
  • Andics, A., McQueen, J. M., & Petersson, K. M. (2013). Mean-based neural coding of voices. NeuroImage, 79, 351-360. doi:10.1016/j.neuroimage.2013.05.002.

    Abstract

    The social significance of recognizing the person who talks to us is obvious, but the neural mechanisms that mediate talker identification are unclear. Regions along the bilateral superior temporal sulcus (STS) and the inferior frontal cortex (IFC) of the human brain are selective for voices, and they are sensitive to rapid voice changes. Although it has been proposed that voice recognition is supported by prototype-centered voice representations, the involvement of these category-selective cortical regions in the neural coding of such "mean voices" has not previously been demonstrated. Using fMRI in combination with a voice identity learning paradigm, we show that voice-selective regions are involved in the mean-based coding of voice identities. Voice typicality is encoded on a supra-individual level in the right STS along a stimulus-dependent, identity-independent (i.e., voice-acoustic) dimension, and on an intra-individual level in the right IFC along a stimulus-independent, identity-dependent (i.e., voice identity) dimension. Voice recognition therefore entails at least two anatomically separable stages, each characterized by neural mechanisms that reference the central tendencies of voice categories.
  • Araújo, S., Faísca, L., Bramão, I., Petersson, K. M., & Reis, A. (2014). Lexical and phonological processes in dyslexic readers: Evidences from a visual lexical decision task. Dyslexia, 20, 38-53. doi:10.1002/dys.1461.

    Abstract

    The aim of the present study was to investigate whether reading failure in the context of an orthography of intermediate consistency is linked to inefficient use of the lexical orthographic reading procedure. The performance of typically developing and dyslexic Portuguese-speaking children was examined in a lexical decision task, where the stimulus lexicality, word frequency and length were manipulated. Both lexicality and length effects were larger in the dyslexic group than in controls, although the interaction between group and frequency disappeared when the data were transformed to control for general performance factors. Children with dyslexia were influenced in lexical decision making by the stimulus length of words and pseudowords, whereas age-matched controls were influenced by the length of pseudowords only. These findings suggest that non-impaired readers rely mainly on lexical orthographic information, but children with dyslexia preferentially use the phonological decoding procedure—albeit poorly—most likely because they struggle to process orthographic inputs as a whole as controls do. Accordingly, dyslexic children showed significantly poorer performance than controls for all types of stimuli, including words that could be considered over-learned, such as high-frequency words. This suggests that their orthographic lexical entries are less established in the orthographic lexicon.
  • Araújo, S., Faísca, L., Bramão, I., Reis, A., & Petersson, K. M. (2015). Lexical and sublexical orthographic processing: An ERP study with skilled and dyslexic adult readers. Brain and Language, 141, 16-27. doi:10.1016/j.bandl.2014.11.007.

    Abstract

    This ERP study investigated the cognitive nature of the P1–N1 components during orthographic processing. We used an implicit reading task with various types of stimuli involving different amounts of sublexical or lexical orthographic processing (words, pseudohomophones, pseudowords, nonwords, and symbols), and tested average and dyslexic readers. An orthographic regularity effect (pseudowords–nonwords contrast) was observed in the average but not in the dyslexic group. This suggests an early sensitivity to the dependencies among letters in word-forms that reflect orthographic structure, while the dyslexic brain apparently fails to be appropriately sensitive to these complex features. Moreover, in the adults the N1 response may already reflect lexical access: (i) the N1 was sensitive to the familiar vs. less familiar orthographic sequence contrast; (ii) and early effects of the phonological form (words–pseudohomophones contrast) were also found. Finally, the later N320 component was attenuated in the dyslexics, suggesting suboptimal processing in later stages of phonological analysis.
  • Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta-analysis. Journal of Educational Psychology, 107(3), 868-883. doi:10.1037/edu0000006.

    Abstract

    Evidence that rapid naming skill is associated with reading ability has become increasingly prevalent in recent years. However, there is considerable variation in the literature concerning the magnitude of this relationship. The objective of the present study was to provide a comprehensive analysis of the evidence on the relationship between rapid automatized naming (RAN) and reading performance. To this end, we conducted a meta-analysis of the correlational relationship between these 2 constructs to (a) determine the overall strength of the RAN–reading association and (b) identify variables that systematically moderate this relationship. A random-effects model analysis of data from 137 studies (857 effect sizes; 28,826 participants) indicated a moderate-to-strong relationship between RAN and reading performance (r = .43, I2 = 68.40). Further analyses revealed that RAN contributes to the 4 measures of reading (word reading, text reading, non-word reading, and reading comprehension), but higher coefficients emerged in favor of real word reading and text reading. RAN stimulus type and type of reading score were the factors with the greatest moderator effect on the magnitude of the RAN–reading relationship. The consistency of orthography and the subjects’ grade level were also found to impact this relationship, although the effect was contingent on reading outcome. It was less evident whether the subjects’ reading proficiency played a role in the relationship. Implications for future studies are discussed.
  • Arnon, I., Casillas, M., Kurumada, C., & Estigarribia, B. (Eds.). (2014). Language in interaction: Studies in honor of Eve V. Clark. Amsterdam: Benjamins.

    Abstract

    Understanding how communicative goals impact and drive the learning process has been a long-standing issue in the field of language acquisition. Recent years have seen renewed interest in the social and pragmatic aspects of language learning: the way interaction shapes what and how children learn. In this volume, we bring together researchers working on interaction in different domains to present a cohesive overview of ongoing interactional research. The studies address the diversity of the environments children learn in; the role of para-linguistic information; the pragmatic forces driving language learning; and the way communicative pressures impact language use and change. Using observational, empirical and computational findings, this volume highlights the effect of interpersonal communication on what children hear and what they learn. This anthology is inspired by and dedicated to Prof. Eve V. Clark – a pioneer in all matters related to language acquisition – and a major force in establishing interaction and communication as crucial aspects of language learning.
  • Asaridou, S. S., Hagoort, P., & McQueen, J. M. (2015). Effects of early bilingual experience with a tone and a non-tone language on speech-music integration. PLoS One, 10(12): e0144225. doi:10.1371/journal.pone.0144225.

    Abstract

    We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

  • Asaridou, S. S. (2015). An ear for pitch: On the effects of experience and aptitude in processing pitch in language and music. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Asaridou, S. S., & McQueen, J. M. (2013). Speech and music shape the listening brain: Evidence for shared domain-general mechanisms. Frontiers in Psychology, 4: 321. doi:10.3389/fpsyg.2013.00321.

    Abstract

    Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on the effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
  • Athanasopoulos, P., Bylund, E., Montero-Melis, G., Damjanovic, L., Schartner, A., Kibbe, A., Riches, N., & Thierry, G. (2015). Two languages, two minds: Flexible cognitive processing driven by language of operation. Psychological Science, 26(4), 518-526. doi:10.1177/0956797614567509.

    Abstract

    People make sense of objects and events around them by classifying them into identifiable categories. The extent to which language affects this process has been the focus of a long-standing debate: Do different languages cause their speakers to behave differently? Here, we show that fluent German-English bilinguals categorize motion events according to the grammatical constraints of the language in which they operate. First, as predicted from cross-linguistic differences in motion encoding, bilingual participants functioning in a German testing context prefer to match events on the basis of motion completion to a greater extent than do bilingual participants in an English context. Second, when bilingual participants experience verbal interference in English, their categorization behavior is congruent with that predicted for German; when bilingual participants experience verbal interference in German, their categorization becomes congruent with that predicted for English. These findings show that language effects on cognition are context-bound and transient, revealing unprecedented levels of malleability in human cognition.

  • Ayub, Q., Yngvadottir, B., Chen, Y., Xue, Y., Hu, M., Vernes, S. C., Fisher, S. E., & Tyler-Smith, C. (2013). FOXP2 targets show evidence of positive selection in European populations. American Journal of Human Genetics, 92, 696-706. doi:10.1016/j.ajhg.2013.03.019.

    Abstract

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction.
  • Azar, Z., & Ozyurek, A. (2015). Discourse Management: Reference tracking in speech and gesture in Turkish narratives. Dutch Journal of Applied Linguistics, 4(2), 222-240. doi:10.1075/dujal.4.2.06aza.

    Abstract

    Speakers achieve coherence in discourse by alternating between differential lexical forms, e.g. noun phrase, pronoun, and null form, in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they already mentioned before. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity, while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German and French). The present study investigates whether the same pattern holds for pro-drop languages. It draws data from adult native speakers of Turkish using elicited narratives. We find that Turkish speakers mostly use fuller forms to code subject referents in the re-introduction context and the null form in the maintenance context, and they point to gesture space for referents more in the re-introduction context compared to the maintenance context. Hence we provide supporting evidence for the inverse correlation between the accessibility of a discourse referent and its coding in speech and gesture. We also find that, as a novel contribution, the third person pronoun is used in the re-introduction context only when the referent was previously mentioned as the object argument of the immediately preceding clause.
  • Baayen, R. H. (2014). Productivity in language production. In D. Sandra, & M. Taft (Eds.), Morphological Structure, Lexical Representation and Lexical Access: A Special Issue of Language and Cognitive Processes (pp. 447-469). London: Routledge.

    Abstract

    Lexical statistics and a production experiment are used to gauge the extent to which the linguistic notion of morphological productivity is relevant for psycholinguistic theories of speech production in languages such as Dutch and English. Lexical statistics of productivity show that despite the relatively poor morphology of Dutch, new words are created often enough for the marginalisation of word formation in theories of speech production to be theoretically unattractive. This conclusion is supported by the results of a production experiment in which subjects freely created hundreds of productive, but only a handful of unproductive, neologisms. A tentative solution is proposed as to why the opposite pattern has been observed in the speech of jargonaphasics.
  • Baggio, G., van Lambalgen, M., & Hagoort, P. (2015). Logic as Marr's computational level: Four case studies. Topics in Cognitive Science, 7, 287-298. doi:10.1111/tops.12125.

    Abstract

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
  • Bakker, I., Takashima, A., van Hell, J. G., & McQueen, J. M. (2015). Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27(7), 1286-1297. doi:10.1162/jocn_a_00801.

    Abstract

    The complementary learning systems account of word learning states that novel words, like other types of memories, undergo an offline consolidation process during which they are gradually integrated into the neocortical memory network. A fundamental change in the neural representation of a novel word should therefore occur in the hours after learning. The present EEG study tested this hypothesis by investigating whether novel words learned before a 24-hr consolidation period elicited more word-like oscillatory responses than novel words learned immediately before testing. In line with previous studies indicating that theta synchronization reflects lexical access, unfamiliar novel words elicited lower power in the theta band (4–8 Hz) than existing words. Recently learned words still showed a marginally lower theta increase than existing words, but theta responses to novel words that had been acquired 24 hr earlier were indistinguishable from responses to existing words. Consistent with evidence that beta desynchronization (16–21 Hz) is related to lexical-semantic processing, we found that both unfamiliar and recently learned novel words elicited less beta desynchronization than existing words. In contrast, no difference was found between novel words learned 24 hr earlier and existing words. These data therefore suggest that an offline consolidation period enables novel words to acquire lexically integrated, word-like neural representations.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Competition from unseen or unheard novel words: Lexical consolidation across modalities. Journal of Memory and Language, 73, 116-139. doi:10.1016/j.jml.2014.03.002.

    Abstract

    In four experiments we investigated the formation of novel word memories across modalities, using competition between novel words and their existing phonological/orthographic neighbours as a test of lexical integration. Auditorily acquired novel words entered into competition both in the spoken modality (Experiment 1) and in the written modality (Experiment 4) after a consolidation period of 24 h. Words acquired from print, on the other hand, showed competition effects after 24 h in a visual word recognition task (Experiment 3) but required additional training and a consolidation period of a week before entering into spoken-word competition (Experiment 2). These cross-modal effects support the hypothesis that lexicalised rather than episodic representations underlie post-consolidation competition effects. We suggest that sublexical phoneme–grapheme conversion during novel word encoding and/or offline consolidation enables the formation of modality-specific lexemes in the untrained modality, which subsequently undergo the same cortical integration process as explicitly perceived word forms in the trained modality. Although conversion takes place in both directions, speech input showed an advantage over print both in terms of lexicalisation and explicit memory performance. In conclusion, the brain is able to integrate and consolidate internally generated lexical information as well as external perceptual input.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Tracking lexical consolidation with ERPs: Lexical and semantic-priming effects on N400 and LPC responses to newly-learned words. Neuropsychologia, 79, 33-41. doi:10.1016/j.neuropsychologia.2015.10.020.
  • Bank, R., Crasborn, O., & Van Hout, R. (2015). Alignment of two languages: The spreading of mouthings in Sign Language of the Netherlands. International Journal of Bilingualism, 19, 40-55. doi:10.1177/1367006913484991.

    Abstract

    Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT are mouth actions that have their origin in spoken Dutch, and are usually time aligned with the signs they co-occur with. Frequently, however, they spread over one or more adjacent signs, so that one mouthing co-occurs with multiple manual signs. We conducted a corpus study to explore how frequently this occurs in NGT and whether there is any sociolinguistic variation in the use of spreading. Further, we looked at the circumstances under which spreading occurs. Answers to these questions may give us insight into the prosodic structure of sign languages. We investigated a sample of the Corpus NGT containing 5929 mouthings by 46 participants. We found that spreading over an adjacent sign is independent of social factors. Further, mouthings that spread are longer than non-spreading mouthings, whether measured in syllables or in milliseconds. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilised in sign languages.
  • Bank, R. (2015). The ubiquity of mouthings in NGT: A corpus study. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Baranova, J. (2015). Other-initiated repair in Russian. Open Linguistics, 1(1), 555-577. doi:10.1515/opli-2015-0019.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversations in Russian. In the discussion of various repair cases, special attention is given to the modifications that the trouble source turn undergoes in response to an open versus a restricted repair initiation. Speakers often modify their problematic turn in multiple ways at once when responding to an open repair initiation. They can alter the word order of the problematic turn, change the prosodic contour of the utterance, omit redundant elements and add more specific ones. By contrast, restricted repair initiations usually receive specific repair solutions that target only one problem at a time.
  • Barendse, M. T. (2015). Dimensionality assessment with factor analysis methods. PhD Thesis, University of Groningen, Groningen.
  • Barendse, M. T., Albers, C. J., Oort, F. J., & Timmerman, M. E. (2014). Measurement bias detection through Bayesian factor analysis. Frontiers in Psychology, 5: 1087. doi:10.3389/fpsyg.2014.01087.

    Abstract

    Measurement bias has been defined as a violation of measurement invariance. Potential violators—variables that possibly violate measurement invariance—can be investigated through restricted factor analysis (RFA). The purpose of the present paper is to investigate a Bayesian approach to estimate RFA models with interaction effects, in order to detect uniform and nonuniform measurement bias. Because modeling nonuniform bias requires an interaction term, it is more complicated than modeling uniform bias. The Bayesian approach seems especially suited for such complex models. In a simulation study we vary the type of bias (uniform, nonuniform), the type of violator (observed continuous, observed dichotomous, latent continuous), and the correlation between the trait and the violator (0.0, 0.5). For each condition, 100 sets of data are generated and analyzed. We examine the accuracy of the parameter estimates and the performance of two bias detection procedures, based on the DIC fit statistic, in Bayesian RFA. Results show that the accuracy of the estimated parameters is satisfactory. Bias detection rates are high in all conditions with an observed violator, and still satisfactory in all other conditions.
  • Barendse, M. T., Oort, F. J., Jak, S., & Timmerman, M. E. (2013). Multilevel exploratory factor analysis of discrete data. Netherlands Journal of Psychology, 67(4), 114-121.
  • Barendse, M. T., Oort, F. J., & Timmerman, M. E. (2015). Using exploratory factor analysis to determine the dimensionality of discrete responses. Structural Equation Modeling: A Multidisciplinary Journal, 22(1), 87-101. doi:10.1080/10705511.2014.934850.

    Abstract

    Exploratory factor analysis (EFA) is commonly used to determine the dimensionality of continuous data. In a simulation study we investigate its usefulness with discrete data. We vary response scales (continuous, dichotomous, polytomous), factor loadings (medium, high), sample size (small, large), and factor structure (simple, complex). For each condition, we generate 1,000 data sets and apply EFA with 5 estimation methods (maximum likelihood [ML] of covariances, ML of polychoric correlations, robust ML, weighted least squares [WLS], and robust WLS) and 3 fit criteria (chi-square test, root mean square error of approximation, and root mean square residual). The various EFA procedures recover more factors when sample size is large, factor loadings are high, factor structure is simple, and response scales have more options. Robust WLS of polychoric correlations is the preferred method, as it is theoretically justified and shows fewer convergence problems than the other estimation methods.
  • Baron-Cohen, S., Murphy, L., Chakrabarti, B., Craig, I., Mallya, U., Lakatosova, S., Rehnstrom, K., Peltonen, L., Wheelwright, S., Allison, C., Fisher, S. E., & Warrier, V. (2014). A genome wide association study of mathematical ability reveals an association at chromosome 3q29, a locus associated with autism and learning difficulties: A preliminary study. PLoS One, 9(5): e96374. doi:10.1371/journal.pone.0096374.

    Abstract

    Mathematical ability is heritable, but few studies have directly investigated its molecular genetic basis. Here we aimed to identify specific genetic contributions to variation in mathematical ability. We carried out a genome wide association scan using pooled DNA in two groups of U.K. samples, based on end of secondary/high school national academic exam achievement: high (n = 419) versus low (n = 183) mathematical ability while controlling for their verbal ability. Significant differences in allele frequencies between these groups were searched for in 906,600 SNPs using the Affymetrix GeneChip Human Mapping version 6.0 array. After meeting a threshold of p<1.5×10−5, 12 SNPs from the pooled association analysis were individually genotyped in 542 of the participants and analyzed to validate the initial associations (lowest p-value 1.14×10−6). In this analysis, one of the SNPs (rs789859) showed significant association after Bonferroni correction, and four (rs10873824, rs4144887, rs12130910, rs2809115) were nominally significant (lowest p-value 3.278×10−4). Three of the SNPs of interest are located within, or near to, known genes (FAM43A, SFT2D1, C14orf64). The SNP that showed the strongest association, rs789859, is located in a region on chromosome 3q29 that has been previously linked to learning difficulties and autism. rs789859 lies 1.3 kbp downstream of LSG1, and 700 bp upstream of FAM43A, mapping within the potential promoter/regulatory region of the latter. To our knowledge, this is only the second study to investigate the association of genetic variants with mathematical ability, and it highlights a number of interesting markers for future study.
  • Baron-Cohen, S., Johnson, D., Asher, J. E., Wheelwright, S., Fisher, S. E., Gregersen, P. K., & Allison, C. (2013). Is synaesthesia more common in autism? Molecular Autism, 4(1): 40. doi:10.1186/2040-2392-4-40.

    Abstract

    BACKGROUND:
    Synaesthesia is a neurodevelopmental condition in which a sensation in one modality triggers a perception in a second modality. Autism (shorthand for Autism Spectrum Conditions) is a neurodevelopmental condition involving social-communication disability alongside resistance to change and unusually narrow interests or activities. Whilst on the surface they appear distinct, they have been suggested to share common atypical neural connectivity.

    METHODS:
    In the present study, we carried out the first prevalence study of synaesthesia in autism to formally test whether these conditions are independent. After exclusions, 164 adults with autism and 97 controls completed a synaesthesia questionnaire, autism spectrum quotient, and test of genuineness-revised (ToG-R) online.

    RESULTS:
    The rate of synaesthesia in adults with autism was 18.9% (31 out of 164), almost three times greater than in controls (7.22%, 7 out of 97, P < 0.05). ToG-R proved unsuitable for synaesthetes with autism.

    CONCLUSIONS:
    The significant increase in synaesthesia prevalence in autism suggests that the two conditions may share some common underlying mechanisms. Future research is needed to develop more feasible validation methods of synaesthesia in autism.

  • Bašnáková, J., Weber, K., Petersson, K. M., Van Berkum, J. J. A., & Hagoort, P. (2014). Beyond the language given: The neural correlates of inferring speaker meaning. Cerebral Cortex, 24(10), 2572-2578. doi:10.1093/cercor/bht112.

    Abstract

    Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message.
  • Bašnáková, J., Van Berkum, J. J. A., Weber, K., & Hagoort, P. (2015). A job interview in the MRI scanner: How does indirectness affect addressees and overhearers? Neuropsychologia, 76, 79-91. doi:10.1016/j.neuropsychologia.2015.03.030.

    Abstract

    In using language, people not only exchange information, but also navigate their social world – for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one’s shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Bašnáková et al., 2013). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Brunia, C. H. M., De Munck, J. C., & Spekreijse, H. (2001). Desynchronization during anticipatory attention for an upcoming stimulus: A comparative EEG/MEG study. Clinical Neurophysiology, 112, 393-403.

    Abstract

    Objectives: Our neurophysiological model of anticipatory behaviour (e.g. Acta Psychol 101 (1999) 213; Bastiaansen et al., 1999a) predicts an activation of (primary) sensory cortex during anticipatory attention for an upcoming stimulus. In this paper we attempt to demonstrate this by means of event-related desynchronization (ERD). Methods: Five subjects performed a time estimation task, and were informed about the quality of their time estimation by either visual or auditory stimuli providing Knowledge of Results (KR). EEG and MEG were recorded in separate sessions, and ERD was computed in the 8–10 and 10–12 Hz frequency bands for both datasets. Results: Both in the EEG and the MEG we found an occipitally maximal ERD preceding the visual KR for all subjects. Preceding the auditory KR, no ERD was present in the EEG, whereas in the MEG we found an ERD over the temporal cortex in two of the 5 subjects. These subjects were also found to have higher levels of absolute power over temporal recording sites in the MEG than the other subjects, which we consider to be an indication of the presence of a 'tau' rhythm (e.g. Neurosci Lett 222 (1997) 111). Conclusions: It is concluded that the results are in line with the predictions of our neurophysiological model.
  • Bastiaansen, M. C. M., & Brunia, C. H. M. (2001). Anticipatory attention: An event-related desynchronization approach. International Journal of Psychophysiology, 43, 91-107.

    Abstract

    This paper addresses the question of whether anticipatory attention - i.e. attention directed towards an upcoming stimulus in order to facilitate its processing - is realized at the neurophysiological level by a pre-stimulus desynchronization of the sensory cortex corresponding to the modality of the anticipated stimulus, reflecting the opening of a thalamocortical gate in the relevant sensory modality. It is argued that a technique called Event-Related Desynchronization (ERD) of rhythmic 10-Hz activity is well suited to study the thalamocortical processes that are thought to mediate anticipatory attention. In a series of experiments, ERD was computed on EEG and MEG data, recorded while subjects performed a time estimation task and were informed about the quality of their time estimation by stimuli providing Knowledge of Results (KR). The modality of the KR stimuli (auditory, visual, or somatosensory) was manipulated both within and between experiments. The results indicate to varying degrees that preceding the presentation of the KR stimuli, ERD is present over the sensory cortex, which corresponds to the modality of the KR stimulus. The general pattern of results supports the notion that a thalamocortical gating mechanism forms the neurophysiological basis of anticipatory attention. Furthermore, the results support the notion that Event-Related Potential (ERP) and ERD measures reflect fundamentally different neurophysiological processes.
  • Bastiaansen, M. C. M., & Hagoort, P. (2015). Frequency-based segregation of syntactic and semantic unification during online sentence level language comprehension. Journal of Cognitive Neuroscience, 27(11), 2095-2107. doi:10.1162/jocn_a_00829.

    Abstract

    During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
  • Bastos, A. M., Vezoli, J., Bosman, C. A., Schoffelen, J.-M., Oostenveld, R., Dowdall, J. R., De Weerd, P., Kennedy, H., & Fries, P. (2015). Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron, 85(2), 390-401. doi:10.1016/j.neuron.2014.12.018.

    Abstract

    Visual cortical areas subserve cognitive functions by interacting in both feedforward and feedback directions. While feedforward influences convey sensory signals, feedback influences modulate feedforward signaling according to the current behavioral context. We investigated whether these interareal influences are subserved differentially by rhythmic synchronization. We correlated frequency-specific directed influences among 28 pairs of visual areas with anatomical metrics of the feedforward or feedback character of the respective interareal projections. This revealed that in the primate visual system, feedforward influences are carried by theta-band (approximately 4 Hz) and gamma-band (approximately 60-80 Hz) synchronization, and feedback influences by beta-band (approximately 14-18 Hz) synchronization. The functional directed influences constrain a functional hierarchy similar to the anatomical hierarchy, but exhibiting task-dependent dynamic changes in particular with regard to the hierarchical positions of frontal areas. Our results demonstrate that feedforward and feedback signaling use distinct frequency channels, suggesting that they subserve differential communication requirements.
  • Bauer, B. L. M. (2013). Impersonal verbs. In G. K. Giannakis (Ed.), Encyclopedia of Ancient Greek Language and Linguistics Online (pp. 197-198). Leiden: Brill. doi:10.1163/2214-448X_eagll_SIM_00000481.

    Abstract

    Impersonal verbs in Greek ‒ as in the other Indo-European languages ‒ exclusively feature 3rd person singular finite forms and convey one of three types of meaning: (a) meteorological conditions; (b) emotional and physical state/experience; (c) modality. In Greek, impersonal verbs predominantly convey meteorological conditions and modality.

  • Bauer, B. L. M. (2014). Indefinite HOMO in the Gospels of the Vulgata. In P. Molinell, P. Cuzzoli, & C. Fedriani (Eds.), Latin vulgaire – latin tardif X (pp. 415-435). Bergamo: Bergamo University Press.
  • Bauer, B. L. M. (2015). Origins of grammatical forms and evidence from Latin. Journal of Indo-European studies, 43, 201-235.

    Abstract

    This article submits that the instances of incipient grammaticalization that are found in the later stages of Latin and that resulted in new grammatical forms in Romance reflect a major linguistic innovation. While the new grammatical forms are created out of lexical or mildly grammatical autonomous elements, earlier processes seem to primarily involve particles with a certain semantic value and freezing. This fundamental difference explains why the attempts of early Indo-Europeanists such as Franz Bopp at tracing the lexical origins of Indo-European inflected forms were unsuccessful and strongly criticized by the Neo-Grammarians.
  • Bauer, B. L. M. (2015). Origins of the indefinite HOMO constructions. In G. Haverling (Ed.), Latin Linguistics in the Early 21st Century: Acts of the 16th International Colloquium on Latin Linguistics (pp. 542-553). Uppsala: Uppsala University.
  • Bavin, E. L., Kidd, E., Prendergast, L., Baker, E., Dissanayake, C., & Prior, M. (2014). Severity of autism is related to children's language processing. Autism Research, 7(6), 687-694. doi:10.1002/aur.1410.

    Abstract

    Problems in language processing have been associated with autism spectrum disorder (ASD), with some research attributing the problems to overall language skills rather than a diagnosis of ASD. Lexical access was assessed in a looking-while-listening task in three groups of 5- to 7-year-old children; two had high-functioning ASD (HFA), an ASD severe (ASD-S) group (n = 16) and an ASD moderate (ASD-M) group (n = 21). The third group were typically developing (TD) (n = 48). Participants heard sentences of the form “Where's the x?” and their eye movements to targets (e.g., train), phonological competitors (e.g., tree), and distractors were recorded. Proportions of looking time at target were analyzed within 200 ms intervals. Significant group differences were found between the ASD-S and TD groups only, at time intervals 1000–1200 and 1200–1400 ms postonset. The TD group was more likely to be fixated on target. These differences were maintained after adjusting for language, verbal and nonverbal IQ, and attention scores. An analysis using parent report of autistic-like behaviors showed higher scores to be associated with lower proportions of looking time at target, regardless of group. Further analysis showed fixation for the TD group to be significantly faster than for the ASD-S. In addition, incremental processing was found for all groups. The study findings suggest that severity of autistic behaviors will impact significantly on children's language processing in real life situations when exposed to syntactically complex material. They also show the value of using online methods for understanding how young children with ASD process language. Autism Res 2014, 7: 687–694.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2015). A chromosomal rearrangement in a child with severe speech and language disorder separates FOXP2 from a functional enhancer. Molecular Cytogenetics, 8: 69. doi:10.1186/s13039-015-0173-0.

    Abstract

    Mutations of FOXP2 in 7q31 cause a rare disorder involving speech apraxia, accompanied by expressive and receptive language impairments. A recent report described a child with speech and language deficits, and a genomic rearrangement affecting chromosomes 7 and 11. One breakpoint mapped to 7q31 and, although outside its coding region, was hypothesised to disrupt FOXP2 expression. We identified an element 2 kb downstream of this breakpoint with epigenetic characteristics of an enhancer. We show that this element drives reporter gene expression in human cell-lines. Thus, displacement of this element by translocation may disturb gene expression, contributing to the observed language phenotype.
  • Becker, R., Pefkou, M., Michel, C. M., & Hervais-Adelman, A. (2013). Left temporal alpha-band activity reflects single word intelligibility. Frontiers in Systems Neuroscience, 7: 121. doi:10.3389/fnsys.2013.00121.

    Abstract

    The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal in the alpha-band revealed a monotonic increase in event-related desynchronization (ERD) for the NV but not the rNV stimuli in the alpha band at a left temporo-central electrode cluster from 420-560 ms reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
  • Behrens, B., Flecken, M., & Carroll, M. (2013). Progressive Attraction: On the Use and Grammaticalization of Progressive Aspect in Dutch, Norwegian, and German. Journal of Germanic linguistics, 25(2), 95-136. doi:10.1017/S1470542713000020.

    Abstract

    This paper investigates the use of aspectual constructions in Dutch, Norwegian, and German, languages in which aspect marking that presents events explicitly as ongoing, is optional. Data were elicited under similar conditions with native speakers in the three countries. We show that while German speakers make insignificant use of aspectual constructions, usage patterns in Norwegian and Dutch present an interesting case of overlap, as well as differences, with respect to a set of factors that attract or constrain the use of different constructions. The results indicate that aspect marking is grammaticalizing in Dutch, but there are no clear signs of a similar process in Norwegian.
  • Benyamin, B., St Pourcain, B., Davis, O. S., Davies, G., Hansell, N. K., Brion, M.-J., Kirkpatrick, R. M., Cents, R. A. M., Franić, S., Miller, M. B., Haworth, C. M. A., Meaburn, E., Price, T. S., Evans, D. M., Timpson, N., Kemp, J., Ring, S., McArdle, W., Medland, S. E., Yang, J., Harris, S. E., Liewald, D. C., Scheet, P., Xiao, X., Hudziak, J. J., de Geus, E. J. C., Jaddoe, V. W. V., Starr, J. M., Verhulst, F. C., Pennell, C., Tiemeier, H., Iacono, W. G., Palmer, L. J., Montgomery, G. W., Martin, N. G., Boomsma, D. I., Posthuma, D., McGue, M., Wright, M. J., Davey Smith, G., Deary, I. J., Plomin, R., & Visscher, P. M. (2014). Childhood intelligence is heritable, highly polygenic and associated with FNBP1L. Molecular Psychiatry, 19(2), 253-258. doi:10.1038/mp.2012.184.

    Abstract

    Intelligence in childhood, as measured by psychometric cognitive tests, is a strong predictor of many important life outcomes, including educational attainment, income, health and lifespan. Results from twin, family and adoption studies are consistent with general intelligence being highly heritable and genetically stable throughout the life course. No robustly associated genetic loci or variants for childhood intelligence have been reported. Here, we report the first genome-wide association study (GWAS) on childhood intelligence (age range 6–18 years) from 17 989 individuals in six discovery and three replication samples. Although no individual single-nucleotide polymorphisms (SNPs) were detected with genome-wide significance, we show that the aggregate effects of common SNPs explain 22–46% of phenotypic variation in childhood intelligence in the three largest cohorts (P = 3.9 × 10^−15, 0.014 and 0.028). FNBP1L, previously reported to be the most significantly associated gene for adult intelligence, was also significantly associated with childhood intelligence (P = 0.003). Polygenic prediction analyses resulted in a significant correlation between predictor and outcome in all replication cohorts. The proportion of childhood intelligence explained by the predictor reached 1.2% (P = 6 × 10^−5), 3.5% (P = 10^−3) and 0.5% (P = 6 × 10^−5) in three independent validation cohorts. Given the sample sizes, these genetic prediction results are consistent with expectations if the genetic architecture of childhood intelligence is like that of body mass index or height. Our study provides molecular support for the heritability and polygenic nature of childhood intelligence. Larger sample sizes will be required to detect individual variants with genome-wide significance.
  • Berghuis, B., De Kovel, C. G. F., van Iterson, L., Lamberts, R. J., Sander, J. W., Lindhout, D., & Koeleman, B. P. C. (2015). Complex SCN8A DNA-abnormalities in an individual with therapy resistant absence epilepsy. Epilepsy Research, 115, 141-144. doi:10.1016/j.eplepsyres.2015.06.007.

    Abstract

    Background De novo SCN8A missense mutations have been identified as a rare dominant cause of epileptic encephalopathy. We described a person with epileptic encephalopathy associated with a mosaic deletion of the SCN8A gene. Methods Array comparative genome hybridization was used to identify chromosomal abnormalities. Next Generation Sequencing was used to screen for variants in known and candidate epilepsy genes. A single nucleotide polymorphism array was used to test whether the SCN8A variants were in cis or in trans. Results We identified a de novo mosaic deletion of exons 2–14 of SCN8A, and a rare maternally inherited missense variant on the other allele in a woman presenting with absence seizures, challenging behavior, intellectual disability and QRS-fragmentation on the ECG. We also found a variant in SCN5A. Conclusions The combination of a rare missense variant with a de novo mosaic deletion of a large part of the SCN8A gene suggests that other possible mechanisms for SCN8A mutations may cause epilepsy; loss of function, genetic modifiers and cellular interference may play a role. This case expands the phenotype associated with SCN8A mutations, with absence epilepsy and regression in language and memory skills.
  • Bergmann, C., Ten Bosch, L., & Boves, L. (2014). A computational model of the headturn preference procedure: Design, challenges, and insights. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes (pp. 125-136). World Scientific. doi:10.1142/9789814458849_0010.

    Abstract

    The Headturn Preference Procedure (HPP) is a frequently used method (e.g., Jusczyk & Aslin, and subsequent studies) to investigate linguistic abilities in infants. In this paradigm infants are usually first familiarised with words and then tested for a listening preference for passages containing those words in comparison to unrelated passages. Listening preference is defined as the time an infant spends attending to those passages with his or her head turned towards a flashing light and the speech stimuli. The knowledge and abilities inferred from the results of HPP studies have been used to reason about and formally model early linguistic skills and language acquisition. However, the actual cause of infants' behaviour in HPP experiments has been subject to numerous assumptions as there are no means to directly tap into cognitive processes. To make these assumptions explicit, and more crucially, to understand how infants' behaviour emerges if only general learning mechanisms are assumed, we introduce a computational model of the HPP. Simulations with the computational HPP model show that the difference in infant behaviour between familiarised and unfamiliar words in passages can be explained by a general learning mechanism and that many assumptions underlying the HPP are not necessarily warranted. We discuss the implications for conventional interpretations of the outcomes of HPP experiments.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2013). A computational model to investigate assumptions in the headturn preference procedure. Frontiers in Psychology, 4: 676. doi:10.3389/fpsyg.2013.00676.

    Abstract

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarization and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximize cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumptions that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviors observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
  • Bergmann, C. (2014). Computational models of early language acquisition and the role of different voices. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bergmann, C., Ten Bosch, L., Fikkert, P., & Boves, L. (2015). Modelling the noise-robustness of infants’ word representations: The impact of previous experience. PLoS One, 10(7): e0132245. doi:10.1371/journal.pone.0132245.

    Abstract

    During language acquisition, infants frequently encounter ambient noise. We present a computational model to address whether specific acoustic processing abilities are necessary to detect known words in moderate noise—an ability attested experimentally in infants. The model implements a general purpose speech encoding and word detection procedure. Importantly, the model contains no dedicated processes for removing or cancelling out ambient noise, and it can replicate the patterns of results obtained in several infant experiments. In addition to noise, we also addressed the role of previous experience with particular target words: does the frequency of a word matter, and does it play a role whether that word has been spoken by one or multiple speakers? The simulation results show that both factors affect noise robustness. We also investigated how robust word detection is to changes in speaker identity by comparing words spoken by known versus unknown speakers during the simulated test. This factor interacted with both noise level and past experience, showing that an increase in exposure is only helpful when a familiar speaker provides the test material. Added variability proved helpful only when encountering an unknown speaker. Finally, we addressed whether infants need to recognise specific words, or whether a more parsimonious explanation of infant behaviour, which we refer to as matching, is sufficient. Recognition involves a focus of attention on a specific target word, while matching only requires finding the best correspondence of acoustic input to a known pattern in the memory. Attending to a specific target word proves to be more noise robust, but a general word matching procedure can be sufficient to simulate experimental data stemming from young infants. A change from acoustic matching to targeted recognition provides an explanation of the improvements observed in infants around their first birthday. In summary, we present a computational model incorporating only the processes infants might employ when hearing words in noise. Our findings show that a parsimonious interpretation of behaviour is sufficient and we offer a formal account of emerging abilities.
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P. M., & Fotopoulou, A. (2014). The affective modulation of motor awareness in anosognosia for hemiplegia: Behavioural and lesion evidence. Cortex, 61, 127-140. doi:10.1016/j.cortex.2014.08.016.

    Abstract

    The possible role of emotion in anosognosia for hemiplegia (i.e., denial of motor deficits contralateral to a brain lesion), has long been debated between psychodynamic and neurocognitive theories. However, there are only a handful of case studies focussing on this topic, and the precise role of emotion in anosognosia for hemiplegia requires empirical investigation. In the present study, we aimed to investigate how negative and positive emotions influence motor awareness in anosognosia. Positive and negative emotions were induced under carefully-controlled experimental conditions in right-hemisphere stroke patients with anosognosia for hemiplegia (n = 11) and controls with clinically normal awareness (n = 10). Only the negative, emotion induction condition resulted in a significant improvement of motor awareness in anosognosic patients compared to controls; the positive emotion induction did not. Using lesion overlay and voxel-based lesion-symptom mapping approaches, we also investigated the brain lesions associated with the diagnosis of anosognosia, as well as with performance on the experimental task. Anatomical areas that are commonly damaged in AHP included the right-hemisphere motor and sensory cortices, the inferior frontal cortex, and the insula. Additionally, the insula, putamen and anterior periventricular white matter were associated with less awareness change following the negative emotion induction. This study suggests that motor unawareness and the observed lack of negative emotions about one's disabilities cannot be adequately explained by either purely motivational or neurocognitive accounts. Instead, we propose an integrative account in which insular and striatal lesions result in weak interoceptive and motivational signals. These deficits lead to faulty inferences about the self, involving a difficulty to personalise new sensorimotor information, and an abnormal adherence to premorbid beliefs about the body.

  • Bidgood, A., Ambridge, B., Pine, J. M., & Rowland, C. F. (2014). The retreat from locative overgeneralisation errors: A novel verb grammaticality judgment study. PLoS One, 9(5): e97634. doi:10.1371/journal.pone.0097634.

    Abstract

    Whilst some locative verbs alternate between the ground- and figure-locative constructions (e.g. Lisa sprayed the flowers with water/Lisa sprayed water onto the flowers), others are restricted to one construction or the other (e.g. *Lisa filled water into the cup/*Lisa poured the cup with water). The present study investigated two proposals for how learners (aged 5–6, 9–10 and adults) acquire this restriction, using a novel-verb-learning grammaticality-judgment paradigm. In support of the semantic verb class hypothesis, participants in all age groups used the semantic properties of novel verbs to determine the locative constructions (ground/figure/both) in which they could and could not appear. In support of the frequency hypothesis, participants' tolerance of overgeneralisation errors decreased with each increasing level of verb frequency (novel/low/high). These results underline the need to develop an integrated account of the roles of semantics and frequency in the retreat from argument structure overgeneralisation.
  • Blackwell, N. L., Perlman, M., & Fox Tree, J. E. (2015). Quotation as a multimodal construction. Journal of Pragmatics, 81, 1-7. doi:10.1016/j.pragma.2015.03.004.

    Abstract

    Quotations are a means to report a broad range of events in addition to speech, and often involve both vocal and bodily demonstration. The present study examined the use of quotation to report a variety of multisensory events (i.e., containing salient visible and audible elements) as participants watched and then described a set of video clips including human speech and animal vocalizations. We examined the relationship between demonstrations conveyed through the vocal versus bodily modality, comparing them across four common quotation devices (be like, go, say, and zero quotatives), as well as across direct and non-direct quotations and retellings. We found that direct quotations involved high levels of both vocal and bodily demonstration, while non-direct quotations involved lower levels in both these channels. In addition, there was a strong positive correlation between vocal and bodily demonstration for direct quotation. This result supports a Multimodal Hypothesis where information from the two channels arises from one central concept.
  • Blasi, D. E., Christiansen, M. H., Wichmann, S., Hammarström, H., & Stadler, P. F. (2014). Sound symbolism and the origins of language. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The evolution of language: Proceedings of the 10th International Conference (EVOLANG 10) (pp. 391-392). Singapore: World Scientific.
  • Blythe, J. (2015). Other-initiated repair in Murrinh-Patha. Open Linguistics, 1, 283-308. doi:10.1515/opli-2015-0003.

    Abstract

    The range of linguistic structures and interactional practices associated with other-initiated repair (OIR) is surveyed for the Northern Australian language Murrinh-Patha. By drawing on a video corpus of informal Murrinh-Patha conversation, the OIR formats are compared in terms of their utility and versatility. Certain “restricted” formats have semantic properties that point to prior trouble source items. While these make the restricted repair initiators more specialised, the “open” formats are less well resourced semantically, which makes them more versatile. They tend to be used when the prior talk is potentially problematic in more ways than one. The open formats (especially thangku, “what?”) tend to solicit repair operations on each potential source of trouble, such that the resultant repair solution improves upon the trouble-source turn in several ways.
  • Blythe, J. (2013). Preference organization driving structuration: Evidence from Australian Aboriginal interaction for pragmatically motivated grammaticalization. Language, 89(4), 883-919.
  • Bocanegra, B. R., Poletiek, F. H., & Zwaan, R. A. (2014). Asymmetrical feature binding across language and perception. In Proceedings of the 7th annual Conference on Embodied and Situated Language Processing (ESLP 2014).
  • Bock, K., Eberhard, K. M., Cutting, J. C., Meyer, A. S., & Schriefers, H. (2001). Some attractions of verb agreement. Cognitive Psychology, 43(2), 83-128. doi:10.1006/cogp.2001.0753.

    Abstract

    In English, words like scissors are grammatically plural but conceptually singular, while words like suds are both grammatically and conceptually plural. Words like army can be construed plurally, despite being grammatically singular. To explore whether and how congruence between grammatical and conceptual number affected the production of subject-verb number agreement in English, we elicited sentence completions for complex subject noun phrases like The advertisement for the scissors. In these phrases, singular subject nouns were followed by distractor words whose grammatical and conceptual numbers varied. The incidence of plural attraction (the use of plural verbs after plural distractors) increased only when distractors were grammatically plural, and revealed no influence from the distractors' number meanings. Companion experiments in Dutch offered converging support for this account and suggested that similar agreement processes operate in that language. The findings argue for a component of agreement that is sensitive primarily to the grammatical reflections of number. Together with other results, the evidence indicates that the implementation of agreement in languages like English and Dutch involves separable processes of number marking and number morphing, in which number meaning plays different parts.

  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants’ immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based “looking game” was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devaluated their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams’ need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • De Boer, B., & Perlman, M. (2014). Physical mechanisms may be as important as brain mechanisms in evolution of speech [Commentary on Ackermann, Hage, & Ziegler, Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective]. Behavioral and Brain Sciences, 37(6), 552-553. doi:10.1017/S0140525X13004007.

    Abstract

    We present two arguments why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional. Second, we present an example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.
  • De Boer, M., Toni, I., & Willems, R. M. (2013). What drives successful verbal communication? Frontiers in Human Neuroscience, 7: 622. doi:10.3389/fnhum.2013.00622.

    Abstract

    There is a vast amount of potential mappings between behaviors and intentions in communication: a behavior can indicate a multitude of different intentions, and the same intention can be communicated with a variety of behaviors. Humans routinely solve these many-to-many referential problems when producing utterances for an Addressee. This ability might rely on social cognitive skills, for instance, the ability to manipulate unobservable summary variables to disambiguate ambiguous behavior of other agents (“mentalizing”) and the drive to invest resources into changing and understanding the mental state of other agents (“communicative motivation”). Alternatively, the ambiguities of verbal communicative interactions might be solved by general-purpose cognitive abilities that process cues that are incidentally associated with the communicative interaction. In this study, we assess these possibilities by testing which cognitive traits account for communicative success during a verbal referential task. Cognitive traits were assessed with psychometric scores quantifying motivation, mentalizing abilities, and general-purpose cognitive abilities, taxing abstract visuo-spatial abilities. Communicative abilities of participants were assessed by using an on-line interactive task that required a speaker to verbally convey a concept to an Addressee. The communicative success of the utterances was quantified by measuring how frequently a number of Evaluators would infer the correct concept. Speakers with high motivational and general-purpose cognitive abilities generated utterances that were more easily interpreted. These findings extend to the domain of verbal communication the notion that motivational and cognitive factors influence the human ability to rapidly converge on shared communicative innovations.
  • Boersma, M., Kemner, C., de Reus, M. A., Collin, G., Snijders, T. M., Hofman, D., Buitelaar, J. K., Stam, C. J., & van den Heuvel, M. P. (2013). Disrupted functional brain networks in autistic toddlers. Brain Connectivity, 3(1), 41-49. doi:10.1089/brain.2012.0127.

    Abstract

    Communication and integration of information between brain regions plays a key role in healthy brain function. Conversely, disruption in brain communication may lead to cognitive and behavioral problems. Autism is a neurodevelopmental disorder that is characterized by impaired social interactions and aberrant basic information processing. Aberrant brain connectivity patterns have indeed been hypothesized to be a key neural underpinning of autism. In this study, graph analytical tools are used to explore the possible deviant functional brain network organization in autism at a very early stage of brain development. Electroencephalography (EEG) recordings in 12 toddlers with autism (mean age 3.5 years) and 19 control subjects were used to assess interregional functional brain connectivity, with functional brain networks constructed at the level of temporal synchronization between brain regions underlying the EEG electrodes. Children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole brain connectivity was found to be significantly reduced in these young patients suggesting an overall under-connectivity of functional brain networks in autism. Our findings support the hypothesis of abnormal neural communication in autism, with deviating effects already present at the early stages of brain development.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2015). Conversational interaction in the scanner: Mentalizing during language processing as revealed by MEG. Cerebral Cortex, 25(9), 3219-3234. doi:10.1093/cercor/bhu116.

    Abstract

    Humans are especially good at taking another’s perspective — representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance’s meaning or largely on-demand, to restore “common ground” when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects’ names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventro-medial prefrontal cortex). Theta oscillations (3–7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bögels, S., & Torreira, F. (2015). Listeners use intonational phrase boundaries to project turn ends in spoken interaction. Journal of phonetics, 52, 46-57. doi:10.1016/j.wocn.2015.04.004.

    Abstract

    In conversation, turn transitions between speakers often occur smoothly, usually within a time window of a few hundred milliseconds. It has been argued, on the basis of a button-press experiment [De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3):515–535], that participants in conversation rely mainly on lexico-syntactic information when timing and producing their turns, and that they do not need to make use of intonational cues to achieve smooth transitions and avoid overlaps. In contrast to this view, but in line with previous observational studies, our results from a dialogue task and a button-press task involving questions and answers indicate that the identification of the end of intonational phrases is necessary for smooth turn-taking. In both tasks, participants never responded to questions (i.e., gave an answer or pressed a button to indicate a turn end) at turn-internal points of syntactic completion in the absence of an intonational phrase boundary. Moreover, in the button-press task, they often pressed the button at the same point of syntactic completion when the final word of an intonational phrase was cross-spliced at that location. Furthermore, truncated stimuli ending in a syntactic completion point but lacking an intonational phrase boundary led to significantly delayed button presses. In light of these results, we argue that earlier claims that intonation is not necessary for correct turn-end projection are misguided, and that research on turn-taking should continue to consider intonation as a source of turn-end cues along with other linguistic and communicative phenomena.
  • Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5: 12881. doi:10.1038/srep12881.

    Abstract

    A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.
  • Bögels, S., Kendrick, K. H., & Levinson, S. C. (2015). Never say no… How the brain interprets the pregnant pause in conversation. PLoS One, 10(12): e0145474. doi:10.1371/journal.pone.0145474.

    Abstract

    In conversation, negative responses to invitations, requests, offers, and the like are more likely to occur with a delay – conversation analysts talk of them as dispreferred. Here we examine the contrastive cognitive load ‘yes’ and ‘no’ responses make, either when relatively fast (300 ms after question offset) or delayed (1000 ms). Participants heard short dialogues contrasting in speed and valence of response while having their EEG recorded. We found that a fast ‘no’ evokes an N400-effect relative to a fast ‘yes’; however, this contrast disappeared in the delayed responses. ‘No’ responses, however, elicited a late frontal positivity both if they were fast and if they were delayed. We interpret these results as follows: a fast ‘no’ evoked an N400 because an immediate response is expected to be positive – this effect disappears as the response time lengthens because now in ordinary conversation the probability of a ‘no’ has increased. However, regardless of the latency of response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred. Together these results show that negative responses to social actions exact a higher cognitive load, but especially when least expected, in immediate response.
  • Bögels, S., Schriefers, H., Vonk, W., Chwilla, D., & Kerkhofs, R. (2013). Processing consequences of superfluous and missing prosodic breaks in auditory sentence comprehension. Neuropsychologia, 51, 2715-2728. doi:10.1016/j.neuropsychologia.2013.09.008.

    Abstract

    This ERP study investigates whether a superfluous prosodic break (i.e., a prosodic break that does not coincide with a syntactic break) has more severe processing consequences during auditory sentence comprehension than a missing prosodic break (i.e., the absence of a prosodic break at the position of a syntactic break). Participants listened to temporarily ambiguous sentences involving a prosody-syntax match or mismatch. The disambiguation of these sentences was always lexical in nature in the present experiment. This contrasts with a related study by Pauker, Itzhak, Baum, and Steinhauer (2011), where the disambiguation was of a lexical type for missing PBs and of a prosodic type for superfluous PBs. Our results converge with those of Pauker et al.: superfluous prosodic breaks lead to more severe processing problems than missing prosodic breaks. Importantly, the present results extend those of Pauker et al. showing that this holds when the disambiguation is always lexical in nature. Furthermore, our results show that the way listeners use prosody can change over the course of the experiment which bears consequences for future studies.
  • Bohnemeyer, J. (2001). Motionland films version 2: Referential communication task with motionland stimulus. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 97-99). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874623.

    Abstract

    How do languages express ideas of movement, and how do they package different components of moving, such as manner and path? This task supports detailed investigation of motion descriptions. The specific study goals are: (a) the coding of “via” grounds (i.e., ground objects which the figure moves along, over, around, through, past, etc.); (b) the coding of direction changes; (c) the spontaneous segmentation of complex motion scenarios; and (d) the gestural representation of motion paths. The stimulus set is 5 simple 3D animations (7-17 seconds long) that show a ball rolling through a landscape. The task is a director-matcher task for two participants. The director describes the path of the ball in each clip to the matcher, who is asked to trace the path with a pen in a 2D picture.

    Additional information

    2001_Motionland_films_v2.zip
  • Bohnemeyer, J., Eisenbeiss, S., & Narasimhan, B. (2001). Event triads. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 100-114). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874630.

    Abstract

    Judgments we make about how similar or different events are to each other can reveal the features we find useful in classifying the world. This task is designed to investigate how speakers of different languages classify events, and to examine how linguistic and gestural encoding relates to non-linguistic classification. Specifically, the task investigates whether speakers judge two events to be similar on the basis of (a) the path versus manner of motion, (b) sub-events versus larger complex events, (c) participant identity versus event identity, and (d) different participant roles. In the task, participants are asked to make similarity judgments concerning sets of 2D animation clips.
  • Bohnemeyer, J. (2001). A questionnaire on event integration. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 177-184). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Bohnemeyer, J., Bowerman, M., & Brown, P. (2001). Cut and break clips. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 90-96). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874626.

    Abstract

    How do different languages treat a particular semantic domain? It has already been established that languages have widely varied words for talking about “cutting” and “breaking” things: for example, English has a very general verb break, but K’iche’ Maya has many different ‘break’ verbs that are used for different kinds of objects (e.g., brittle, flexible, long). The aim of this task is to map out cross-linguistic lexicalisation patterns in the cutting/breaking domain. The stimuli comprise 61 short video clips that show one or two actors breaking various objects (sticks, carrots, pieces of cloth or string, etc.) using various instruments (a knife, a hammer, an axe, their hands, etc.), or situations in which various kinds of objects break spontaneously. The clips are used to elicit descriptions of actors’ actions and the state changes that the objects undergo.

    Additional information

    2001_Cut_and_break_clips.zip
  • Bohnemeyer, J. (2001). Toponym questionnaire. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 55-61). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874620.

    Abstract

    Place-names (toponyms) are at the intersection of spatial language, culture, and cognition. This questionnaire prepares the researcher to answer three overarching questions: how to formally identify place-names in the research language (i.e. according to morphological and syntactic criteria); what places place-names are employed to refer to (e.g. human settlements, landscape sites); and how places are semantically construed for this purpose. The questionnaire can in principle be answered using an existing database. However, additional elicitation with language consultants is recommended.
  • Bolton, J. L., Hayward, C., Direk, N., Lewis, J. G., Hammond, G. L., Hill, L. A., Anderson, A., Huffman, J., Wilson, J. F., Campbell, H., Rudan, I., Wright, A., Hastie, N., Wild, S. H., Velders, F. P., Hofman, A., Uitterlinden, A. G., Lahti, J., Räikkönen, K., Kajantie, E., Widen, E., Palotie, A., Eriksson, J. G., Kaakinen, M., Järvelin, M.-R., Timpson, N. J., Davey Smith, G., Ring, S. M., Evans, D. M., St Pourcain, B., Tanaka, T., Milaneschi, Y., Bandinelli, S., Ferrucci, L., van der Harst, P., Rosmalen, J. G. M., Bakker, S. J. L., Verweij, N., Dullaart, R. P. F., Mahajan, A., Lindgren, C. M., Morris, A., Lind, L., Ingelsson, E., Anderson, L. N., Pennell, C. E., Lye, S. J., Matthews, S. G., Eriksson, J., Mellstrom, D., Ohlsson, C., Price, J. F., Strachan, M. W. J., Reynolds, R. M., Tiemeier, H., Walker, B. R., & CORtisol NETwork (CORNET) Consortium (2014). Genome wide association identifies common variants at the SERPINA6/SERPINA1 locus influencing plasma cortisol and corticosteroid binding globulin. PLoS Genetics, 10(7): e1004474. doi:10.1371/journal.pgen.1004474.

    Abstract

    Variation in plasma levels of cortisol, an essential hormone in the stress response, is associated in population-based studies with cardio-metabolic, inflammatory and neuro-cognitive traits and diseases. Heritability of plasma cortisol is estimated at 30-60% but no common genetic contribution has been identified. The CORtisol NETwork (CORNET) consortium undertook genome wide association meta-analysis for plasma cortisol in 12,597 Caucasian participants, replicated in 2,795 participants. The results indicate that <1% of variance in plasma cortisol is accounted for by genetic variation in a single region of chromosome 14. This locus spans SERPINA6, encoding corticosteroid binding globulin (CBG, the major cortisol-binding protein in plasma), and SERPINA1, encoding α1-antitrypsin (which inhibits cleavage of the reactive centre loop that releases cortisol from CBG). Three partially independent signals were identified within the region, represented by common SNPs; detailed biochemical investigation in a nested sub-cohort showed all these SNPs were associated with variation in total cortisol binding activity in plasma, but some variants influenced total CBG concentrations while the top hit (rs12589136) influenced the immunoreactivity of the reactive centre loop of CBG. Exome chip and 1000 Genomes imputation analysis of this locus in the CROATIA-Korcula cohort identified missense mutations in SERPINA6 and SERPINA1 that did not account for the effects of common variants. These findings reveal a novel common genetic source of variation in binding of cortisol by CBG, and reinforce the key role of CBG in determining plasma cortisol levels. In turn this genetic variation may contribute to cortisol-associated degenerative diseases.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bønnelykke, K., Matheson, M. C., Pers, T. H., Granell, R., Strachan, D. P., Alves, A. C., Linneberg, A., Curtin, J. A., Warrington, N. M., Standl, M., Kerkhof, M., Jonsdottir, I., Bukvic, B. K., Kaakinen, M., Sleimann, P., Thorleifsson, G., Thorsteinsdottir, U., Schramm, K., Baltic, S., Kreiner-Møller, E., Simpson, A., St Pourcain, B., Coin, L., Hui, J., Walters, E. H., Tiesler, C. M. T., Duffy, D. L., Jones, G., Ring, S. M., McArdle, W. L., Price, L., Robertson, C. F., Pekkanen, J., Tang, C. S., Thiering, E., Montgomery, G. W., Hartikainen, A.-L., Dharmage, S. C., Husemoen, L. L., Herder, C., Kemp, J. P., Elliot, P., James, A., Waldenberger, M., Abramson, M. J., Fairfax, B. P., Knight, J. C., Gupta, R., Thompson, P. J., Holt, P., Sly, P., Hirschhorn, J. N., Blekic, M., Weidinger, S., Hakonarsson, H., Stefansson, K., Heinrich, J., Postma, D. S., Custovic, A., Pennell, C. E., Jarvelin, M.-R., Koppelman, G. H., Timpson, N., Ferreira, M. A., Bisgaard, H., Henderson, A. J., Australian Asthma Genetics Consortium (AAGC), & EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium (2013). Meta-analysis of genome-wide association studies identifies ten loci influencing allergic sensitization. Nature Genetics, 45(8), 902-906. doi:10.1038/ng.2694.

    Abstract

    Allergen-specific immunoglobulin E (present in allergic sensitization) has a central role in the pathogenesis of allergic disease. We performed the first large-scale genome-wide association study (GWAS) of allergic sensitization in 5,789 affected individuals and 10,056 controls and followed up the top SNP at each of 26 loci in 6,114 affected individuals and 9,920 controls. We increased the number of susceptibility loci with genome-wide significant association with allergic sensitization from three to ten, including SNPs in or near TLR6, C11orf30, STAT6, SLC25A46, HLA-DQB1, IL1RL1, LPP, MYC, IL2 and HLA-B. All the top SNPs were associated with allergic symptoms in an independent study. Risk-associated variants at these ten loci were estimated to account for at least 25% of allergic sensitization and allergic rhinitis. Understanding the molecular mechanisms underlying these associations may provide new insights into the etiology of allergic disease.
