Publications

  • Acerbi, A., Van Leeuwen, E. J. C., Haun, D. B. M., & Tennie, C. (2018). Reply to 'Sigmoidal acquisition curves are good indicators of conformist transmission'. Scientific Reports, 8(1): 14016. doi:10.1038/s41598-018-30382-0.

    Abstract

    In their study ‘Sigmoidal Acquisition Curves are Good Indicators of Conformist Transmission’, Smaldino, Aplin and Farine contest our original findings regarding the conditional validity of using population-level sigmoidal acquisition curves as evidence of individual-level conformity. We acknowledge the identification of useful nuances, yet conclude that our original findings remain relevant for the study of conformist learning mechanisms. Replying to: Smaldino, P. E., Aplin, L. M. & Farine, D. R. Sigmoidal Acquisition Curves Are Good Indicators of Conformist Transmission. Sci. Rep. 8, https://doi.org/10.1038/s41598-018-30248-5 (2018).
  • Acheson, D. J., Wells, J. B., & MacDonald, M. C. (2008). New and updated tests of print exposure and reading abilities in college students. Behavior Research Methods, 40(1), 278-289. doi:10.3758/BRM.40.1.278.

    Abstract

    The relationship between print exposure and measures of reading skill was examined in college students (N=99, 58 female; mean age=20.3 years). Print exposure was measured with several new self-reports of reading and writing habits, as well as updated versions of the Author Recognition Test and the Magazine Recognition Test (Stanovich & West, 1989). Participants completed a sentence comprehension task with syntactically complex sentences, and reading times and comprehension accuracy were measured. An additional measure of reading skill was provided by participants’ scores on the verbal portions of the ACT, a standardized achievement test. Higher levels of print exposure were associated with higher sentence processing abilities and superior verbal ACT performance. The relative merits of different print exposure assessments are discussed.
  • Acheson, D. J., & Hagoort, P. (2014). Twisting tongues to test for conflict monitoring in speech production. Frontiers in Human Neuroscience, 8: 206. doi:10.3389/fnhum.2014.00206.

    Abstract

    A number of recent studies have hypothesized that monitoring in speech production may occur via domain-general mechanisms responsible for the detection of response conflict. Outside of language, two ERP components have consistently been elicited in conflict-inducing tasks (e.g., the flanker task): the stimulus-locked N2 on correct trials, and the response-locked error-related negativity (ERN). The present investigation used these electrophysiological markers to test whether a common response conflict monitor is responsible for monitoring in speech and non-speech tasks. Electroencephalography (EEG) was recorded while participants performed a tongue twister (TT) task and a manual version of the flanker task. In the TT task, people rapidly read sequences of four nonwords arranged in TT and non-TT patterns three times. In the flanker task, people responded with a left/right button press to a center-facing arrow, and conflict was manipulated by the congruency of the flanking arrows. Behavioral results showed typical effects of both tasks, with increased error rates and slower speech onset times for TT relative to non-TT trials and for incongruent relative to congruent flanker trials. In the flanker task, stimulus-locked EEG analyses replicated previous results, with a larger N2 for incongruent relative to congruent trials, and a response-locked ERN. In the TT task, stimulus-locked analyses revealed broad, frontally-distributed differences beginning around 50 ms and lasting until just before speech initiation, with TT trials more negative than non-TT trials; response-locked analyses revealed an ERN. Correlation across these measures showed some correlations within a task, but little evidence of systematic cross-task correlation. 
Although the present results do not speak against conflict signals from the production system serving as cues to self-monitoring, they are not consistent with signatures of response conflict being mediated by a single, domain-general conflict monitor.
  • Agus, T., Carrion Castillo, A., Pressnitzer, D., & Ramus, F. (2014). Perceptual learning of acoustic noise by individuals with dyslexia. Journal of Speech, Language, and Hearing Research., 57, 1069-1077. doi:10.1044/1092-4388(2013/13-0020).

    Abstract

    Purpose: A phonological deficit is thought to affect most individuals with developmental dyslexia. The present study addresses whether the phonological deficit is caused by difficulties with perceptual learning of fine acoustic details. Method: A demanding test of nonverbal auditory memory, “noise learning,” was administered to both adults with dyslexia and control adult participants. On each trial, listeners had to decide whether a stimulus was a 1-s noise token or 2 abutting presentations of the same 0.5-s noise token (repeated noise). Without the listener’s knowledge, the exact same noise tokens were presented over many trials. An improved ability to perform the task for such “reference” noises reflects learning of their acoustic details. Results: Listeners with dyslexia did not differ from controls in any aspect of the task, qualitatively or quantitatively. They required the same amount of training to achieve discrimination of repeated from nonrepeated noises, and they learned the reference noises as often and as rapidly as the control group. However, they did show all the hallmarks of dyslexia, including a well-characterized phonological deficit. Conclusion: The data did not support the hypothesis that deficits in basic auditory processing or nonverbal learning and memory are the cause of the phonological deficit in dyslexia.
  • Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2014). Towards a Computational Model of Actor-Based Language Comprehension. Neuroinformatics, 12(1), 143-179. doi:10.1007/s12021-013-9198-x.

    Abstract

    Neurophysiological data from a range of typologically diverse languages provide evidence for a cross-linguistically valid, actor-based strategy of understanding sentence-level meaning. This strategy seeks to identify the participant primarily responsible for the state of affairs (the actor) as quickly and unambiguously as possible, thus resulting in competition for the actor role when there are multiple candidates. Due to its applicability across languages with vastly different characteristics, we have proposed that the actor strategy may derive from more basic cognitive or neurobiological organizational principles, though it is also shaped by distributional properties of the linguistic input (e.g. the morphosyntactic coding strategies for actors in a given language). Here, we describe an initial computational model of the actor strategy and how it interacts with language-specific properties. Specifically, we contrast two distance metrics derived from the output of the computational model (one weighted and one unweighted) as potential measures of the degree of competition for actorhood by testing how well they predict modulations of electrophysiological activity engendered by language processing. To this end, we present an EEG study on word order processing in German and use linear mixed-effects models to assess the effect of the various distance metrics. Our results show that a weighted metric, which takes into account the weighting of an actor-identifying feature in the language under consideration, outperforms an unweighted distance measure. We conclude that actor competition effects cannot be reduced to feature overlap between multiple sentence participants and thereby to the notion of similarity-based interference, which is prominent in current memory-based models of language processing.
Finally, we argue that, in addition to illuminating the underlying neurocognitive mechanisms of actor competition, the present model can form the basis for a more comprehensive, neurobiologically plausible computational model of constructing sentence-level meaning.
  • Alferink, I., & Gullberg, M. (2014). French-Dutch bilinguals do not maintain obligatory semantic distinctions: Evidence from placement verbs. Bilingualism: Language and Cognition, 17, 22-37. doi:10.1017/S136672891300028X.

    Abstract

    It is often said that bilinguals are not the sum of two monolinguals but that bilingual systems represent a third pattern. This study explores the exact nature of this pattern. We ask whether there is evidence of a merged system when one language makes an obligatory distinction that the other one does not, namely in the case of placement verbs in French and Dutch, and whether such a merged system is realised as a more general or a more specific system. The results show that in elicited descriptions Belgian French-Dutch bilinguals drop one of the categories in one of the languages, resulting in a more general semantic system in comparison with the non-contact variety. They do not uphold the obligatory distinction in the verb nor elsewhere despite its communicative relevance. This raises important questions regarding how widespread these differences are and what drives these patterns.
  • Alhama, R. G., & Zuidema, W. (2018). Pre-Wiring and Pre-Training: What Does a Neural Network Need to Learn Truly General Identity Rules? Journal of Artificial Intelligence Research, 61, 927-946. doi:10.1613/jair.1.11197.

    Abstract

    In an influential paper (“Rule Learning by Seven-Month-Old Infants”), Marcus, Vijayan, Rao and Vishton claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two methods that aim to provide such an initial state: a manipulation of the initial connections of the network in a cognitively plausible manner (concretely, by implementing a “delay-line” memory), and a pre-training algorithm that incrementally challenges the network with novel stimuli. We implement such techniques in an Echo State Network (ESN), and we show that the ESN is able to learn truly general identity rules only when both techniques are combined. Finally, we discuss the relation between these cognitively motivated techniques and recent advances in Deep Learning.
  • Ambridge, B., Pine, J. M., Rowland, C. F., Freudenthal, D., & Chang, F. (2014). Avoiding dative overgeneralisation errors: semantics, statistics or both? Language, Cognition and Neuroscience, 29(2), 218-243. doi:10.1080/01690965.2012.738300.

    Abstract

    How do children eventually come to avoid the production of overgeneralisation errors, in particular, those involving the dative (e.g., *I said her “no”)? The present study addressed this question by obtaining from adults and children (5–6, 9–10 years) judgements of well-formed and over-general datives with 301 different verbs (44 for children). A significant effect of pre-emption—whereby the use of a verb in the prepositional-object (PO)-dative construction constitutes evidence that double-object (DO)-dative uses are not permitted—was observed for every age group. A significant effect of entrenchment—whereby the use of a verb in any construction constitutes evidence that unattested dative uses are not permitted—was also observed for every age group, with both predictors also accounting for developmental change between ages 5–6 and 9–10 years. Adults demonstrated knowledge of a morphophonological constraint that prohibits Latinate verbs from appearing in the DO-dative construction (e.g., *I suggested her the trip). Verbs’ semantic properties (supplied by independent adult raters) explained additional variance for all groups and developmentally, with the relative influence of narrow- vs broad-range semantic properties increasing with age. We conclude by outlining an account of the formation and restriction of argument-structure generalisations designed to accommodate these findings.
  • Ambridge, B., Rowland, C. F., & Pine, J. M. (2008). Is structure dependence an innate constraint? New experimental evidence from children's complex-question production. Cognitive Science, 32(1), 222-255. doi:10.1080/03640210701703766.

    Abstract

    According to Crain and Nakayama (1987), when forming complex yes/no questions, children do not make errors such as Is the boy who smoking is crazy? because they have innate knowledge of structure dependence and so will not move the auxiliary from the relative clause. However, simple recurrent networks are also able to avoid such errors, on the basis of surface distributional properties of the input (Lewis & Elman, 2001; Reali & Christiansen, 2005). Two new elicited production studies revealed that (a) children occasionally produce structure-dependence errors and (b) the pattern of children's auxiliary-doubling errors (Is the boy who is smoking is crazy?) suggests a sensitivity to surface co-occurrence patterns in the input. This article concludes that current data do not provide any support for the claim that structure dependence is an innate constraint, and that it is possible that children form a structure-dependent grammar on the basis of exposure to input that exhibits this property.
  • Ambridge, B., Pine, J. M., Rowland, C. F., & Young, C. R. (2008). The effect of verb semantic class and verb frequency (entrenchment) on children’s and adults’ graded judgements of argument-structure overgeneralization errors. Cognition, 106(1), 87-129. doi:10.1016/j.cognition.2006.12.015.

    Abstract

    Participants (aged 5–6 yrs, 9–10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). Learnability and cognition: the acquisition of argument structure. Cambridge, MA: MIT Press]: “directed motion” (fall, tumble), “going out of existence” (disappear, vanish) and “semivoluntary expression of emotion” (laugh, giggle). In support of Pinker’s semantic verb class hypothesis, participants’ preference for grammatical over overgeneralized uses of novel (and English) verbs increased between 5–6 yrs and 9–10 yrs, and was greatest for the latter class, which is associated with the lowest degree of direct external causation (the prototypical meaning of the transitive causative construction). In support of Braine and Brooks’s [Braine, M.D.S., & Brooks, P.J. (1995). Verb argument structure and the problem of avoiding an overgeneral grammar. In M. Tomasello & W. E. Merriman (Eds.), Beyond names for things: Young children’s acquisition of verbs (pp. 352–376). Hillsdale, NJ: Erlbaum] entrenchment hypothesis, all participants showed the greatest preference for grammatical over ungrammatical uses of high frequency verbs, with this preference smaller for low frequency verbs, and smaller again for novel verbs. We conclude that both the formation of semantic verb classes and entrenchment play a role in children’s retreat from argument-structure overgeneralization errors.
  • Araújo, S., Faísca, L., Bramão, I., Petersson, K. M., & Reis, A. (2014). Lexical and phonological processes in dyslexic readers: Evidences from a visual lexical decision task. Dyslexia, 20, 38-53. doi:10.1002/dys.1461.

    Abstract

    The aim of the present study was to investigate whether reading failure in the context of an orthography of intermediate consistency is linked to inefficient use of the lexical orthographic reading procedure. The performance of typically developing and dyslexic Portuguese-speaking children was examined in a lexical decision task, where the stimulus lexicality, word frequency and length were manipulated. Both lexicality and length effects were larger in the dyslexic group than in controls, although the interaction between group and frequency disappeared when the data were transformed to control for general performance factors. Children with dyslexia were influenced in lexical decision making by the stimulus length of words and pseudowords, whereas age-matched controls were influenced by the length of pseudowords only. These findings suggest that non-impaired readers rely mainly on lexical orthographic information, but children with dyslexia preferentially use the phonological decoding procedure—albeit poorly—most likely because they struggle to process orthographic inputs as a whole, as controls do. Accordingly, dyslexic children showed significantly poorer performance than controls for all types of stimuli, including words that could be considered over-learned, such as high-frequency words. This suggests that their orthographic lexical entries are less established in the orthographic lexicon.
  • Arshamian, A., Iravani, B., Majid, A., & Lundström, J. N. (2018). Respiration modulates olfactory memory consolidation in humans. The Journal of Neuroscience, 38(48), 10286-10294. doi:10.1523/JNEUROSCI.3360-17.2018.

    Abstract

    In mammals, respiratory-locked hippocampal rhythms are implicated in the scaffolding and transfer of information between sensory and memory networks. These oscillations are entrained by nasal respiration and driven by the olfactory bulb. They then travel to the piriform cortex where they propagate further downstream to the hippocampus and modulate neural processes critical for memory formation. In humans, bypassing nasal airflow through mouth-breathing abolishes these rhythms and impacts encoding as well as recognition processes, thereby reducing memory performance. It has been hypothesized that similar behavior should be observed for the consolidation process, the stage between encoding and recognition, where memory is reactivated and strengthened. However, direct evidence for such an effect is lacking in human and non-human animals. Here we tested this hypothesis by examining the effect of respiration on consolidation of episodic odor memory. In two separate sessions, female and male participants encoded odors followed by a one-hour awake resting consolidation phase where they either breathed solely through their nose or mouth. Immediately after the consolidation phase, memory for odors was tested. Recognition memory significantly increased during nasal respiration compared to mouth respiration during consolidation. These results provide the first evidence that respiration directly impacts consolidation of episodic events, and lend further support to the notion that core cognitive functions are modulated by the respiratory cycle.
  • Ashby, J., & Martin, A. E. (2008). Prosodic phonological representations early in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance, 34(1), 224-236. doi:10.1037/0096-1523.34.1.224.

    Abstract

    Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable information before lexical access. In a modified lexical decision task (Experiment 1), words were preceded by parafoveal previews that were congruent with a target's initial syllable as well as previews that contained 1 letter more or less than the initial syllable. Lexical decision times were faster in the syllable congruent conditions than in the incongruent conditions. In Experiment 2, we recorded brain electrical potentials (electroencephalograms) during single word reading in a masked priming paradigm. The event-related potential waveform elicited in the syllable congruent condition was more positive 250-350 ms posttarget compared with the waveform elicited in the syllable incongruent condition. In combination, these experiments demonstrate that readers process prosodic syllable information early in visual word recognition in English. They offer further evidence that skilled readers routinely activate elaborated, speechlike phonological representations during silent reading.
  • Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390-412. doi:10.1016/j.jml.2007.12.005.

    Abstract

    This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to traditional analyses based on quasi-F tests, by-subjects analyses, combined by-subjects and by-items analyses, and random regression. Applications and possibilities across a range of domains of inquiry are discussed.
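    The worked example in Baayen, Davidson & Bates (2008) uses R's lme4; as an illustrative analogue only, a minimal sketch of a crossed subject-and-item random-effects analysis can be written in Python with statsmodels, where both grouping factors are expressed as variance components within a single dummy group. The data, effect sizes, and variable names below are simulated and hypothetical, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a fully crossed design: every subject responds to every item.
rng = np.random.default_rng(0)
n_subj, n_item = 20, 10
subjects = np.repeat(np.arange(n_subj), n_item)
items = np.tile(np.arange(n_item), n_subj)
condition = rng.integers(0, 2, size=n_subj * n_item)

# Reaction times with crossed random intercepts for subjects and items,
# a fixed condition effect of 40 ms, and residual noise.
subj_re = rng.normal(0, 30, n_subj)[subjects]
item_re = rng.normal(0, 20, n_item)[items]
rt = 500 + 40 * condition + subj_re + item_re + rng.normal(0, 50, n_subj * n_item)

df = pd.DataFrame({"rt": rt, "condition": condition,
                   "subject": subjects, "item": items})

# statsmodels' MixedLM is built around nested grouping, so crossed random
# effects are specified as variance components inside one constant group.
df["one"] = 1
vc = {"subject": "0 + C(subject)", "item": "0 + C(item)"}
model = smf.mixedlm("rt ~ condition", df, groups="one", vc_formula=vc)
result = model.fit()
print(result.summary())
```

The estimated fixed effect for `condition` should recover roughly the simulated 40 ms difference, with separate variance estimates for subjects and items; this is the crossed-random-effects structure the paper argues for, in place of separate by-subjects and by-items analyses.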
  • Baggio, G., Van Lambalgen, M., & Hagoort, P. (2008). Computing and recomputing discourse models: An ERP study. Journal of Memory and Language, 59, 36-53. doi:10.1016/j.jml.2008.02.005.

    Abstract

    While syntactic reanalysis has been extensively investigated in psycholinguistics, comparatively little is known about reanalysis in the semantic domain. We used event-related brain potentials (ERPs) to keep track of semantic processes involved in understanding short narratives such as ‘The girl was writing a letter when her friend spilled coffee on the paper’. We hypothesize that these sentences are interpreted in two steps: (1) when the progressive clause is processed, a discourse model is computed in which the goal state (a complete letter) is predicted to hold; (2) when the subordinate clause is processed, the initial representation is recomputed to the effect that, in the final discourse structure, the goal state is not satisfied. Critical sentences evoked larger sustained anterior negativities (SANs) compared to controls, starting around 400 ms following the onset of the sentence-final word, and lasting for about 400 ms. The amplitude of the SAN was correlated with the frequency with which participants, in an offline probe-selection task, responded that the goal state was not attained. Our results raise the possibility that the brain supports some form of non-monotonic recomputation to integrate information which invalidates previously held assumptions.
  • Bai, C., Bornkessel-Schlesewsky, I., Wang, L., Hung, Y.-C., Schlesewsky, M., & Burkhardt, P. (2008). Semantic composition engenders an N400: Evidence from Chinese compounds. NeuroReport, 19(6), 695-699. doi:10.1097/WNR.0b013e3282fc1eb7.

    Abstract

    This study provides evidence for the role of semantic composition in compound word processing. We examined the online processing of isolated two meaning unit compounds in Chinese, a language that uses compounding to ‘disambiguate’ meaning. Using auditory presentation, we manipulated the semantic meaning and syntactic category of the two meaning units forming a compound. Event-related brain potential-recordings revealed a significant influence of semantic information, which was reflected in an N400 signature for compounds whose meaning differed from the constituent meanings. This finding suggests that the combination of distinct constituent meanings to form an overall compound meaning consumes processing resources. By contrast, no comparable difference was observed based on syntactic category information. Our findings indicate that combinatory semantic processing at the word level correlates with N400 effects.
  • Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Competition from unseen or unheard novel words: Lexical consolidation across modalities. Journal of Memory and Language, 73, 116-139. doi:10.1016/j.jml.2014.03.002.

    Abstract

    In four experiments we investigated the formation of novel word memories across modalities, using competition between novel words and their existing phonological/orthographic neighbours as a test of lexical integration. Auditorily acquired novel words entered into competition both in the spoken modality (Experiment 1) and in the written modality (Experiment 4) after a consolidation period of 24 h. Words acquired from print, on the other hand, showed competition effects after 24 h in a visual word recognition task (Experiment 3) but required additional training and a consolidation period of a week before entering into spoken-word competition (Experiment 2). These cross-modal effects support the hypothesis that lexicalised rather than episodic representations underlie post-consolidation competition effects. We suggest that sublexical phoneme–grapheme conversion during novel word encoding and/or offline consolidation enables the formation of modality-specific lexemes in the untrained modality, which subsequently undergo the same cortical integration process as explicitly perceived word forms in the trained modality. Although conversion takes place in both directions, speech input showed an advantage over print both in terms of lexicalisation and explicit memory performance. In conclusion, the brain is able to integrate and consolidate internally generated lexical information as well as external perceptual input.
  • Bakker-Marshall, I., Takashima, A., Schoffelen, J.-M., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2018). Theta-band Oscillations in the Middle Temporal Gyrus Reflect Novel Word Consolidation. Journal of Cognitive Neuroscience, 30(5), 621-633. doi:10.1162/jocn_a_01240.

    Abstract

    Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286–1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
  • Barendse, M. T., Albers, C. J., Oort, F. J., & Timmerman, M. E. (2014). Measurement bias detection through Bayesian factor analysis. Frontiers in Psychology, 5: 1087. doi:10.3389/fpsyg.2014.01087.

    Abstract

    Measurement bias has been defined as a violation of measurement invariance. Potential violators—variables that possibly violate measurement invariance—can be investigated through restricted factor analysis (RFA). The purpose of the present paper is to investigate a Bayesian approach to estimate RFA models with interaction effects, in order to detect uniform and nonuniform measurement bias. Because modeling nonuniform bias requires an interaction term, it is more complicated than modeling uniform bias. The Bayesian approach seems especially suited for such complex models. In a simulation study we vary the type of bias (uniform, nonuniform), the type of violator (observed continuous, observed dichotomous, latent continuous), and the correlation between the trait and the violator (0.0, 0.5). For each condition, 100 sets of data are generated and analyzed. We examine the accuracy of the parameter estimates and the performance of two bias detection procedures, based on the DIC fit statistic, in Bayesian RFA. Results show that the accuracy of the estimated parameters is satisfactory. Bias detection rates are high in all conditions with an observed violator, and still satisfactory in all other conditions.
  • Baron-Cohen, S., Murphy, L., Chakrabarti, B., Craig, I., Mallya, U., Lakatosova, S., Rehnstrom, K., Peltonen, L., Wheelwright, S., Allison, C., Fisher, S. E., & Warrier, V. (2014). A genome wide association study of mathematical ability reveals an association at chromosome 3q29, a locus associated with autism and learning difficulties: A preliminary study. PLoS One, 9(5): e96374. doi:10.1371/journal.pone.0096374.

    Abstract

    Mathematical ability is heritable, but few studies have directly investigated its molecular genetic basis. Here we aimed to identify specific genetic contributions to variation in mathematical ability. We carried out a genome wide association scan using pooled DNA in two groups of U.K. samples, based on end of secondary/high school national academic exam achievement: high (n = 419) versus low (n = 183) mathematical ability while controlling for their verbal ability. Significant differences in allele frequencies between these groups were searched for in 906,600 SNPs using the Affymetrix GeneChip Human Mapping version 6.0 array. After meeting a threshold of p < 1.5 × 10⁻⁵, 12 SNPs from the pooled association analysis were individually genotyped in 542 of the participants and analyzed to validate the initial associations (lowest p-value 1.14 × 10⁻⁶). In this analysis, one of the SNPs (rs789859) showed significant association after Bonferroni correction, and four (rs10873824, rs4144887, rs12130910, rs2809115) were nominally significant (lowest p-value 3.278 × 10⁻⁴). Three of the SNPs of interest are located within, or near to, known genes (FAM43A, SFT2D1, C14orf64). The SNP that showed the strongest association, rs789859, is located in a region on chromosome 3q29 that has been previously linked to learning difficulties and autism. rs789859 lies 1.3 kbp downstream of LSG1, and 700 bp upstream of FAM43A, mapping within the potential promoter/regulatory region of the latter. To our knowledge, this is only the second study to investigate the association of genetic variants with mathematical ability, and it highlights a number of interesting markers for future study.
  • Basnakova, J., Weber, K., Petersson, K. M., Van Berkum, J. J. A., & Hagoort, P. (2014). Beyond the language given: The neural correlates of inferring speaker meaning. Cerebral Cortex, 24(10), 2572-2578. doi:10.1093/cercor/bht112.

    Abstract

    Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message.
  • Bastiaansen, M. C. M., Oostenveld, R., Jensen, O., & Hagoort, P. (2008). I see what you mean: Theta power increases are involved in the retrieval of lexical semantic information. Brain and Language, 106(1), 15-28. doi:10.1016/j.bandl.2007.10.006.

    Abstract

    An influential hypothesis regarding the neural basis of the mental lexicon is that semantic representations are neurally implemented as distributed networks carrying sensory, motor and/or more abstract functional information. This work investigates whether the semantic properties of words partly determine the topography of such networks. Subjects performed a visual lexical decision task while their EEG was recorded. We compared the EEG responses to nouns with either visual semantic properties (VIS, referring to colors and shapes) or with auditory semantic properties (AUD, referring to sounds). A time–frequency analysis of the EEG revealed power increases in the theta (4–7 Hz) and lower-beta (13–18 Hz) frequency bands, and an early power increase and subsequent decrease for the alpha (8–12 Hz) band. In the theta band we observed a double dissociation: temporal electrodes showed larger theta power increases in the AUD condition, while occipital leads showed larger theta responses in the VIS condition. The results support the notion that semantic representations are stored in functional networks with a topography that reflects the semantic properties of the stored items, and provide further evidence that oscillatory brain dynamics in the theta frequency range are functionally related to the retrieval of lexical semantic information.
  • Bauer, B. L. M., & Mota, M. (2018). On language, cognition, and the brain: An interview with Peter Hagoort. Sobre linguagem, cognição e cérebro: Uma entrevista com Peter Hagoort. Revista da Anpoll, (45), 291-296. doi:10.18309/anp.v1i45.1179.

    Abstract

    Managing Director of the Max Planck Institute for Psycholinguistics, founding Director of the Donders Centre for Cognitive Neuroimaging (DCCN, 1999), and professor of Cognitive Neuroscience at Radboud University, all located in Nijmegen, the Netherlands, PETER HAGOORT examines how the brain controls language production and comprehension. He was one of the first to integrate psychological theory and models from neuroscience in an attempt to understand how the human language faculty is instantiated in the brain.
  • Bavin, E. L., Kidd, E., Prendergast, L., Baker, E., Dissanayake, C., & Prior, M. (2014). Severity of autism is related to children's language processing. Autism Research, 7(6), 687-694. doi:10.1002/aur.1410.

    Abstract

    Problems in language processing have been associated with autism spectrum disorder (ASD), with some research attributing the problems to overall language skills rather than a diagnosis of ASD. Lexical access was assessed in a looking-while-listening task in three groups of 5- to 7-year-old children; two had high-functioning ASD (HFA), an ASD severe (ASD-S) group (n = 16) and an ASD moderate (ASD-M) group (n = 21). The third group were typically developing (TD) (n = 48). Participants heard sentences of the form “Where's the x?” and their eye movements to targets (e.g., train), phonological competitors (e.g., tree), and distractors were recorded. Proportions of looking time at target were analyzed within 200 ms intervals. Significant group differences were found between the ASD-S and TD groups only, at time intervals 1000–1200 and 1200–1400 ms postonset. The TD group was more likely to be fixated on target. These differences were maintained after adjusting for language, verbal and nonverbal IQ, and attention scores. An analysis using parent report of autistic-like behaviors showed higher scores to be associated with lower proportions of looking time at target, regardless of group. Further analysis showed fixation for the TD group to be significantly faster than for the ASD-S. In addition, incremental processing was found for all groups. The study findings suggest that severity of autistic behaviors will impact significantly on children's language processing in real life situations when exposed to syntactically complex material. They also show the value of using online methods for understanding how young children with ASD process language. Autism Res 2014, 7: 687–694.
  • Becker, M., Devanna, P., Fisher, S. E., & Vernes, S. C. (2018). Mapping of Human FOXP2 Enhancers Reveals Complex Regulation. Frontiers in Molecular Neuroscience, 11: 47. doi:10.3389/fnmol.2018.00047.

    Abstract

    Mutations of the FOXP2 gene cause a severe speech and language disorder, providing a molecular window into the neurobiology of language. Individuals with FOXP2 mutations have structural and functional alterations affecting brain circuits that overlap with sites of FOXP2 expression, including regions of the cortex, striatum, and cerebellum. FOXP2 displays complex patterns of expression in the brain, as well as in non-neuronal tissues, suggesting that sophisticated regulatory mechanisms control its spatio-temporal expression. However, to date, little is known about the regulation of FOXP2 or the genomic elements that control its expression. Using chromatin conformation capture (3C), we mapped the human FOXP2 locus to identify putative enhancer regions that engage in long-range interactions with the promoter of this gene. We demonstrate the ability of the identified enhancer regions to drive gene expression. We also show regulation of the FOXP2 promoter and enhancer regions by candidate regulators – FOXP family and TBR1 transcription factors. These data point to regulatory elements that may contribute to the temporal- or tissue-specific expression patterns of human FOXP2. Understanding the upstream regulatory pathways controlling FOXP2 expression will bring new insight into the molecular networks contributing to human language and related disorders.
  • Beckmann, N. S., Indefrey, P., & Petersen, W. (2018). Words count, but thoughts shift: A frame-based account to conceptual shifts in noun countability. Voprosy Kognitivnoy Lingvistiki (Issues of Cognitive Linguistics), 2, 79-89. doi:10.20916/1812-3228-2018-2-79-89.

    Abstract

    The current paper proposes a frame-based account to conceptual shifts in the countability domain. We interpret shifts in noun countability as syntactically driven metonymy. Inserting a noun in an incongruent noun phrase, that is, combining it with a determiner of the other countability class, gives rise to a re-interpretation of the noun referent. We assume lexical entries to be three-fold frame complexes connecting conceptual knowledge representations with language-specific form representations via a lemma level. Empirical data from a lexical decision experiment are presented that support the assumption of such a lemma level connecting perceptual input of linguistic signs to conceptual knowledge.
  • Belke, E., Humphreys, G. W., Watson, D. G., Meyer, A. S., & Telling, A. L. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70, 1444-1458. doi:10.3758/PP.70.8.1444.

    Abstract

    Moores, Laiti, and Chelazzi (2003) found semantic interference from associate competitors during visual object search, demonstrating the existence of top-down semantic influences on the deployment of attention to objects. We examined whether effects of semantically related competitors (same-category members or associates) interacted with the effects of perceptual or cognitive load. We failed to find any interaction between competitor effects and perceptual load. However, the competitor effects increased significantly when participants were asked to retain one or five digits in memory throughout the search task. Analyses of eye movements and viewing times showed that a cognitive load did not affect the initial allocation of attention but rather the time it took participants to accept or reject an object as the target. We discuss the implications of our findings for theories of conceptual short-term memory and visual attention.
  • Belpaeme, T., Vogt, P., Van den Berghe, R., Bergmann, K., Göksun, T., De Haas, M., Kanero, J., Kennedy, J., Küntay, A. C., Oudgenoeg-Paz, O., Papadopoulos, F., Schodde, T., Verhagen, J., Wallbridge, C. D., Willemsen, B., De Wit, J., Geçkin, V., Hoffmann, L., Kopp, S., Krahmer, E., Mamus, E., Montanier, J.-M., Oranç, C., & Pandey, A. K. (2018). Guidelines for designing social robots as second language tutors. International Journal of Social Robotics, 10(3), 325-341. doi:10.1007/s12369-018-0467-6.

    Abstract

    In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda to study how core concepts of second language learning can be taught by a social robot. It suggests guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical aspects and early studies regarding the effectiveness of social robots as second language tutors.
  • Benítez-Burraco, A., & Dediu, D. (2018). Ancient DNA and language evolution: A special section. Journal of Language Evolution, 3(1), 47-48. doi:10.1093/jole/lzx024.
  • Bentz, C., Dediu, D., Verkerk, A., & Jäger, G. (2018). The evolution of language families is shaped by the environment beyond neutral drift. Nature Human Behaviour, 2, 816-821. doi:10.1038/s41562-018-0457-6.

    Abstract

    There are more than 7,000 languages spoken in the world today. It has been argued that the natural and social environment of languages drives this diversity. However, a fundamental question is how strong are environmental pressures, and does neutral drift suffice as a mechanism to explain diversification? We estimate the phylogenetic signals of geographic dimensions, distance to water, climate and population size on more than 6,000 phylogenetic trees of 46 language families. Phylogenetic signals of environmental factors are generally stronger than expected under the null hypothesis of no relationship with the shape of family trees. Importantly, they are also—in most cases—not compatible with neutral drift models of constant-rate change across the family tree branches. Our results suggest that language diversification is driven by further adaptive and non-adaptive pressures. Language diversity cannot be understood without modelling the pressures that physical, ecological and social factors exert on language users in different environments across the globe.
  • Benyamin, B., St Pourcain, B., Davis, O. S., Davies, G., Hansell, N. K., Brion, M.-J., Kirkpatrick, R. M., Cents, R. A. M., Franić, S., Miller, M. B., Haworth, C. M. A., Meaburn, E., Price, T. S., Evans, D. M., Timpson, N., Kemp, J., Ring, S., McArdle, W., Medland, S. E., Yang, J., Harris, S. E., Liewald, D. C., Scheet, P., Xiao, X., Hudziak, J. J., de Geus, E. J. C., Jaddoe, V. W. V., Starr, J. M., Verhulst, F. C., Pennell, C., Tiemeier, H., Iacono, W. G., Palmer, L. J., Montgomery, G. W., Martin, N. G., Boomsma, D. I., Posthuma, D., McGue, M., Wright, M. J., Davey Smith, G., Deary, I. J., Plomin, R., & Visscher, P. M. (2014). Childhood intelligence is heritable, highly polygenic and associated with FNBP1L. Molecular Psychiatry, 19(2), 253-258. doi:10.1038/mp.2012.184.

    Abstract

    Intelligence in childhood, as measured by psychometric cognitive tests, is a strong predictor of many important life outcomes, including educational attainment, income, health and lifespan. Results from twin, family and adoption studies are consistent with general intelligence being highly heritable and genetically stable throughout the life course. No robustly associated genetic loci or variants for childhood intelligence have been reported. Here, we report the first genome-wide association study (GWAS) on childhood intelligence (age range 6–18 years) from 17,989 individuals in six discovery and three replication samples. Although no individual single-nucleotide polymorphisms (SNPs) were detected with genome-wide significance, we show that the aggregate effects of common SNPs explain 22–46% of phenotypic variation in childhood intelligence in the three largest cohorts (P=3.9 × 10−15, 0.014 and 0.028). FNBP1L, previously reported to be the most significantly associated gene for adult intelligence, was also significantly associated with childhood intelligence (P=0.003). Polygenic prediction analyses resulted in a significant correlation between predictor and outcome in all replication cohorts. The proportion of childhood intelligence explained by the predictor reached 1.2% (P=6 × 10−5), 3.5% (P=10−3) and 0.5% (P=6 × 10−5) in three independent validation cohorts. Given the sample sizes, these genetic prediction results are consistent with expectations if the genetic architecture of childhood intelligence is like that of body mass index or height. Our study provides molecular support for the heritability and polygenic nature of childhood intelligence. Larger sample sizes will be required to detect individual variants with genome-wide significance.
  • Bercelli, F., Rossano, F., & Viaro, M. (2008). Different place, different action: Clients' personal narratives in psychotherapy. Text and Talk, 28(3), 283-305. doi:10.1515/TEXT.2008.014.

    Abstract

    This paper deals with clients' personal narratives in psychotherapy. Using the method of conversation analysis, we focus on actions and tasks accomplished through clients' narratives. We identify, within the overall structural organization of therapeutic talk in our corpus, two different sequential placements of clients' narratives and describe some of their distinctive features. When they are placed within an inquiry phase of the session and are solicited by therapists' questions, the clients' narratives mainly provide information for therapists in the service of their inquiring agenda. When placed within an elaboration phase of the session, personal narratives are regularly volunteered by clients and produced as responses to therapists' reinterpretations, i.e., statements working up clients' circumstances as previously described by clients. In this latter placement, they mainly offer further evidence relevant to the therapists' reinterpretations, and thus show how clients understand therapists' reinterpretations and what they make of them. The import of these findings, for both an explication of therapeutic techniques and a better understanding of the therapeutic process, is also discussed.
  • Bergmann, C., & Cristia, A. (2018). Environmental influences on infants’ native vowel discrimination: The case of talker number in daily life. Infancy, 23(4), 484-501. doi:10.1111/infa.12232.

    Abstract

    Both quality and quantity of speech from the primary caregiver have been found to impact language development. A third aspect of the input has been largely ignored: the number of talkers who provide input. Some infants spend most of their waking time with only one person; others hear many different talkers. Even if the very same words are spoken the same number of times, the pronunciations can be more variable when several talkers pronounce them. Is language acquisition affected by the number of people who provide input? To shed light on the possible link between how many people provide input in daily life and infants’ native vowel discrimination, three age groups were tested: 4-month-olds (before attunement to native vowels), 6-month-olds (at the cusp of native vowel attunement) and 12-month-olds (well attuned to the native vowel system). No relationship was found between talker number and native vowel discrimination skills in 4- and 6-month-olds, who are overall able to discriminate the vowel contrast. At 12 months, we observe a small positive relationship, but further analyses reveal that the data are also compatible with the null hypothesis of no relationship. Implications in the context of infant language acquisition and cognitive development are discussed.
  • Bergmann, C., Tsuji, S., Piccinini, P. E., Lewis, M. L., Braginsky, M. B., Frank, M. C., & Cristia, A. (2018). Promoting replicability in developmental research through meta-analyses: Insights from language acquisition research. Child Development, 89(6), 1996-2009. doi:10.1111/cdev.13079.

    Abstract

    Previous work suggests key factors for replicability, a necessary feature for theory building, include statistical power and appropriate research planning. These factors are examined by analyzing a collection of 12 standardized meta-analyses on language development between birth and 5 years. With a median effect size of Cohen's d = 0.45 and typical sample size of 18 participants, most research is underpowered (range: 6%–99%; median 44%), and calculating power based on seminal publications is not a suitable strategy. Method choice can be improved, as shown in analyses on exclusion rates and effect size as a function of method. The article ends with a discussion on how to increase replicability in both language acquisition studies specifically and developmental research more generally.
  • Berkers, R. M. W. J., Ekman, M., van Dongen, E. V., Takashima, A., Barth, M., Paller, K. A., & Fernández, G. (2018). Cued reactivation during slow-wave sleep induces brain connectivity changes related to memory stabilization. Scientific Reports, 8: 16958. doi:10.1038/s41598-018-35287-6.

    Abstract

    Memory reprocessing following acquisition enhances memory consolidation. Specifically, neural activity during encoding is thought to be ‘replayed’ during subsequent slow-wave sleep. Such memory replay is thought to contribute to the functional reorganization of neural memory traces. In particular, memory replay may facilitate the exchange of information across brain regions by inducing a reconfiguration of connectivity across the brain. Memory reactivation can be induced by external cues through a procedure known as “targeted memory reactivation”. Here, we analysed data from a published study with auditory cues used to reactivate visual object-location memories during slow-wave sleep. We characterized effects of memory reactivation on brain network connectivity using graph-theory. We found that cue presentation during slow-wave sleep increased global network integration of occipital cortex, a visual region that was also active during retrieval of object locations. Although cueing did not have an overall beneficial effect on the retention of cued versus uncued associations, individual differences in overnight memory stabilization were related to enhanced network integration of occipital cortex. Furthermore, occipital cortex displayed enhanced connectivity with mnemonic regions, namely the hippocampus, parahippocampal gyrus, thalamus and medial prefrontal cortex during cue sound presentation. Together, these results suggest a neural mechanism where cue-induced replay during sleep increases integration of task-relevant perceptual regions with mnemonic regions. This cross-regional integration may be instrumental for the consolidation and long-term storage of enduring memories.
  • Berrettini, W., Yuan, X., Tozzi, F., Song, K., Francks, C., Chilcoat, H., Waterworth, D., Muglia, P., & Mooser, V. (2008). Alpha-5/alpha-3 nicotinic receptor subunit alleles increase risk for heavy smoking. Molecular Psychiatry, 13, 368-373. doi:10.1038/sj.mp.4002154.

    Abstract

    Twin studies indicate that additive genetic effects explain most of the variance in nicotine dependence (ND), a construct emphasizing habitual heavy smoking despite adverse consequences, tolerance and withdrawal. To detect ND alleles, we assessed cigarettes per day (CPD) regularly smoked, in two European populations via whole genome association techniques. In these approximately 7500 persons, a common haplotype in the CHRNA3-CHRNA5 nicotinic receptor subunit gene cluster was associated with CPD (nominal P=6.9 × 10−5). In a third set of European populations (n = approximately 7500) which had been genotyped for approximately 6000 SNPs in approximately 2000 genes, an allele in the same haplotype was associated with CPD (nominal P=2.6 × 10−6). These results (in three independent populations of European origin, totaling approximately 15,000 individuals) suggest that a common haplotype in the CHRNA5/CHRNA3 gene cluster on chromosome 15 contains alleles, which predispose to ND.
  • Besharati, S., Forkel, S. J., Kopelman, M., Solms, M., Jenkinson, P. M., & Fotopoulou, A. (2014). The affective modulation of motor awareness in anosognosia for hemiplegia: Behavioural and lesion evidence. Cortex, 61, 127-140. doi:10.1016/j.cortex.2014.08.016.

    Abstract

    The possible role of emotion in anosognosia for hemiplegia (i.e., denial of motor deficits contralateral to a brain lesion), has long been debated between psychodynamic and neurocognitive theories. However, there are only a handful of case studies focussing on this topic, and the precise role of emotion in anosognosia for hemiplegia requires empirical investigation. In the present study, we aimed to investigate how negative and positive emotions influence motor awareness in anosognosia. Positive and negative emotions were induced under carefully-controlled experimental conditions in right-hemisphere stroke patients with anosognosia for hemiplegia (n = 11) and controls with clinically normal awareness (n = 10). Only the negative, emotion induction condition resulted in a significant improvement of motor awareness in anosognosic patients compared to controls; the positive emotion induction did not. Using lesion overlay and voxel-based lesion-symptom mapping approaches, we also investigated the brain lesions associated with the diagnosis of anosognosia, as well as with performance on the experimental task. Anatomical areas that are commonly damaged in AHP included the right-hemisphere motor and sensory cortices, the inferior frontal cortex, and the insula. Additionally, the insula, putamen and anterior periventricular white matter were associated with less awareness change following the negative emotion induction. This study suggests that motor unawareness and the observed lack of negative emotions about one's disabilities cannot be adequately explained by either purely motivational or neurocognitive accounts. Instead, we propose an integrative account in which insular and striatal lesions result in weak interoceptive and motivational signals. These deficits lead to faulty inferences about the self, involving a difficulty to personalise new sensorimotor information, and an abnormal adherence to premorbid beliefs about the body.
  • Bidgood, A., Ambridge, B., Pine, J. M., & Rowland, C. F. (2014). The retreat from locative overgeneralisation errors: A novel verb grammaticality judgment study. PLoS One, 9(5): e97634. doi:10.1371/journal.pone.0097634.

    Abstract

    Whilst some locative verbs alternate between the ground- and figure-locative constructions (e.g. Lisa sprayed the flowers with water/Lisa sprayed water onto the flowers), others are restricted to one construction or the other (e.g. *Lisa filled water into the cup/*Lisa poured the cup with water). The present study investigated two proposals for how learners (aged 5–6, 9–10 and adults) acquire this restriction, using a novel-verb-learning grammaticality-judgment paradigm. In support of the semantic verb class hypothesis, participants in all age groups used the semantic properties of novel verbs to determine the locative constructions (ground/figure/both) in which they could and could not appear. In support of the frequency hypothesis, participants' tolerance of overgeneralisation errors decreased with each increasing level of verb frequency (novel/low/high). These results underline the need to develop an integrated account of the roles of semantics and frequency in the retreat from argument structure overgeneralisation.
  • Böckler, A., Hömke, P., & Sebanz, N. (2014). Invisible Man: Exclusion from shared attention affects gaze behavior and self-reports. Social Psychological and Personality Science, 5(2), 140-148. doi:10.1177/1948550613488951.

    Abstract

    Social exclusion results in lowered satisfaction of basic needs and shapes behavior in subsequent social situations. We investigated participants' immediate behavioral response during exclusion from an interaction that consisted of establishing eye contact. A newly developed eye-tracker-based "looking game" was employed; participants exchanged looks with two virtual partners in an exchange where the player who had just been looked at chose whom to look at next. While some participants received as many looks as the virtual players (included), others were ignored after two initial looks (excluded). Excluded participants reported lower basic need satisfaction, lower evaluation of the interaction, and devalued their interaction partners more than included participants, demonstrating that people are sensitive to epistemic ostracism. In line with Williams' need-threat model, eye-tracking results revealed that excluded participants did not withdraw from the unfavorable interaction, but increased the number of looks to the player who could potentially reintegrate them.
  • De Boer, B., & Thompson, B. (2018). Biology-culture co-evolution in finite populations. Scientific Reports, 8: 1209. doi:10.1038/s41598-017-18928-0.

    Abstract

    Language is the result of two concurrent evolutionary processes: Biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: The speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: A generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: In realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.
  • De Boer, B., & Perlman, M. (2014). Physical mechanisms may be as important as brain mechanisms in evolution of speech [Commentary on Ackerman, Hage, & Ziegler. Brain Mechanisms of acoustic communication in humans and nonhuman primates: an evolutionary perspective]. Behavioral and Brain Sciences, 37(6), 552-553. doi:10.1017/S0140525X13004007.

    Abstract

    We present two arguments why physical adaptations for vocalization may be as important as neural adaptations. First, fine control over vocalization is not easy for physical reasons, and modern humans may be exceptional. Second, we present an example of a gorilla that shows rudimentary voluntary control over vocalization, indicating that some neural control is already shared with great apes.
  • Bögels, S., Casillas, M., & Levinson, S. C. (2018). Planning versus comprehension in turn-taking: Fast responders show reduced anticipatory processing of the question. Neuropsychologia, 109, 295-310. doi:10.1016/j.neuropsychologia.2017.12.028.

    Abstract

    Rapid response latencies in conversation suggest that responders start planning before the ongoing turn is finished. Indeed, an earlier EEG study suggests that listeners start planning their responses to questions as soon as they can (Bögels, S., Magyari, L., & Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Scientific Reports, 5, 12881). The present study aimed to (1) replicate this early planning effect and (2) investigate whether such early response planning incurs a cost on participants’ concurrent comprehension of the ongoing turn. During the experiment participants answered questions from a confederate partner. To address aim (1), the questions were designed such that response planning could start either early or late in the turn. Our results largely replicate Bögels et al. (2015) showing a large positive ERP effect and an oscillatory alpha/beta reduction right after participants could have first started planning their verbal response, again suggesting an early start of response planning. To address aim (2), the confederate's questions also contained either an expected word or an unexpected one to elicit a differential N400 effect, either before or after the start of response planning. We hypothesized an attenuated N400 effect after response planning had started. In contrast, the N400 effects before and after planning did not differ. There was, however, a positive correlation between participants' response time and their N400 effect size after planning had started; quick responders showed a smaller N400 effect, suggesting reduced attention to comprehension and possibly reduced anticipatory processing. We conclude that early response planning can indeed impact comprehension processing.
  • Bolton, J. L., Hayward, C., Direk, N., Lewis, J. G., Hammond, G. L., Hill, L. A., Anderson, A., Huffman, J., Wilson, J. F., Campbell, H., Rudan, I., Wright, A., Hastie, N., Wild, S. H., Velders, F. P., Hofman, A., Uitterlinden, A. G., Lahti, J., Räikkönen, K., Kajantie, E., Widen, E., Palotie, A., Eriksson, J. G., Kaakinen, M., Järvelin, M.-R., Timpson, N. J., Davey Smith, G., Ring, S. M., Evans, D. M., St Pourcain, B., Tanaka, T., Milaneschi, Y., Bandinelli, S., Ferrucci, L., van der Harst, P., Rosmalen, J. G. M., Bakker, S. J. L., Verweij, N., Dullaart, R. P. F., Mahajan, A., Lindgren, C. M., Morris, A., Lind, L., Ingelsson, E., Anderson, L. N., Pennell, C. E., Lye, S. J., Matthews, S. G., Eriksson, J., Mellstrom, D., Ohlsson, C., Price, J. F., Strachan, M. W. J., Reynolds, R. M., Tiemeier, H., Walker, B. R., & CORtisol NETwork (CORNET) Consortium (2014). Genome Wide Association Identifies Common Variants at the SERPINA6/SERPINA1 Locus Influencing Plasma Cortisol and Corticosteroid Binding Globulin. PLoS Genetics, 10(7): e1004474. doi:10.1371/journal.pgen.1004474.

    Abstract

    Variation in plasma levels of cortisol, an essential hormone in the stress response, is associated in population-based studies with cardio-metabolic, inflammatory and neuro-cognitive traits and diseases. Heritability of plasma cortisol is estimated at 30-60% but no common genetic contribution has been identified. The CORtisol NETwork (CORNET) consortium undertook genome wide association meta-analysis for plasma cortisol in 12,597 Caucasian participants, replicated in 2,795 participants. The results indicate that <1% of variance in plasma cortisol is accounted for by genetic variation in a single region of chromosome 14. This locus spans SERPINA6, encoding corticosteroid binding globulin (CBG, the major cortisol-binding protein in plasma), and SERPINA1, encoding α1-antitrypsin (which inhibits cleavage of the reactive centre loop that releases cortisol from CBG). Three partially independent signals were identified within the region, represented by common SNPs; detailed biochemical investigation in a nested sub-cohort showed all these SNPs were associated with variation in total cortisol binding activity in plasma, but some variants influenced total CBG concentrations while the top hit (rs12589136) influenced the immunoreactivity of the reactive centre loop of CBG. Exome chip and 1000 Genomes imputation analysis of this locus in the CROATIA-Korcula cohort identified missense mutations in SERPINA6 and SERPINA1 that did not account for the effects of common variants. These findings reveal a novel common genetic source of variation in binding of cortisol by CBG, and reinforce the key role of CBG in determining plasma cortisol levels. In turn this genetic variation may contribute to cortisol-associated degenerative diseases.
  • Bosker, H. R., & Ghitza, O. (2018). Entrained theta oscillations guide perception of subsequent speech: Behavioral evidence from rate normalization. Language, Cognition and Neuroscience, 33(8), 955-967. doi:10.1080/23273798.2018.1439179.

    Abstract

    This psychoacoustic study provides behavioral evidence that neural entrainment in the theta range (3-9 Hz) causally shapes speech perception. Adopting the ‘rate normalization’ paradigm (presenting compressed carrier sentences followed by uncompressed target words), we show that uniform compression of a speech carrier to syllable rates inside the theta range influences perception of subsequent uncompressed targets, but compression outside theta range does not. However, the influence of carriers – compressed outside theta range – on target perception is salvaged when carriers are ‘repackaged’ to have a packet rate inside theta. This suggests that the brain can only successfully entrain to syllable/packet rates within theta range, with a causal influence on the perception of subsequent speech, in line with recent neuroimaging data. Thus, this study points to a central role for sustained theta entrainment in rate normalization and contributes to our understanding of the functional role of brain oscillations in speech perception.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). Native 'um's elicit prediction of low-frequency referents, but non-native 'um's do not. Journal of Memory and Language, 75, 104-116. doi:10.1016/j.jml.2014.05.004.

    Abstract

    Speech comprehension involves extensive use of prediction. Linguistic prediction may be guided by the semantics or syntax, but also by the performance characteristics of the speech signal, such as disfluency. Previous studies have shown that listeners, when presented with the filler uh, exhibit a disfluency bias for discourse-new or unknown referents, drawing inferences about the source of the disfluency. The goal of the present study is to study the contrast between native and non-native disfluencies in speech comprehension. Experiment 1 presented listeners with pictures of high-frequency (e.g., a hand) and low-frequency objects (e.g., a sewing machine) and with fluent and disfluent instructions. Listeners were found to anticipate reference to low-frequency objects when encountering disfluency, thus attributing disfluency to speaker trouble in lexical retrieval. Experiment 2 showed that, when participants listened to disfluent non-native speech, no anticipation of low-frequency referents was observed. We conclude that listeners can adapt their predictive strategies to the (non-native) speaker at hand, extending our understanding of the role of speaker identity in speech comprehension.
  • Bosker, H. R. (2018). Putting Laurel and Yanny in context. The Journal of the Acoustical Society of America, 144(6), EL503-EL508. doi:10.1121/1.5070144.

    Abstract

    Recently, the world’s attention was caught by an audio clip that was perceived as “Laurel” or “Yanny”. Opinions were sharply split: many could not believe others heard something different from their perception. However, a crowd-source experiment with >500 participants shows that it is possible to make people hear Laurel, where they previously heard Yanny, by manipulating preceding acoustic context. This study is not only the first to reveal within-listener variation in Laurel/Yanny percepts, but also to demonstrate contrast effects for global spectral information in larger frequency regions. Thus, it highlights the intricacies of human perception underlying these social media phenomena.
  • Bosker, H. R., & Cooke, M. (2018). Talkers produce more pronounced amplitude modulations when speaking in noise. The Journal of the Acoustical Society of America, 143(2), EL121-EL126. doi:10.1121/1.5024404.

    Abstract

    Speakers adjust their voice when talking in noise (known as Lombard speech), facilitating speech comprehension. Recent neurobiological models of speech perception emphasize the role of amplitude modulations in speech-in-noise comprehension, helping neural oscillators to ‘track’ the attended speech. This study tested whether talkers produce more pronounced amplitude modulations in noise. Across four different corpora, modulation spectra showed greater power in amplitude modulations below 4 Hz in Lombard speech compared to matching plain speech. This suggests that noise-induced speech contains more pronounced amplitude modulations, potentially helping the listening brain to entrain to the attended talker, aiding comprehension.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). The perception of fluency in native and non-native speech. Language Learning, 64, 579-614. doi:10.1111/lang.12067.

    Abstract

    Where native speakers supposedly are fluent by default, non-native speakers often have to strive hard to achieve a native-like fluency level. However, disfluencies (such as pauses, fillers, repairs, etc.) occur in both native and non-native speech and it is as yet unclear how fluency raters weigh the fluency characteristics of native and non-native speech. Two rating experiments compared the way raters assess the fluency of native and non-native speech. The fluency characteristics of native and non-native speech were controlled by using phonetic manipulations in pause (Experiment 1) and speed characteristics (Experiment 2). The results show that the ratings on manipulated native and non-native speech were affected in a similar fashion. This suggests that there is no difference in the way listeners weigh the fluency characteristics of native and non-native speakers.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Brand, S., & Ernestus, M. (2018). Listeners’ processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: evidence from native listeners and learners of French. Quarterly Journal of Experimental Psychology, 71(5), 1240-1259. doi:10.1080/17470218.2017.1313282.

    Abstract

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners’ exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.
  • Brand, J., Monaghan, P., & Walker, P. (2018). The changing role of sound‐symbolism for small versus large vocabularies. Cognitive Science, 42(S2), 578-590. doi:10.1111/cogs.12565.

    Abstract

    Natural language contains many examples of sound‐symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound‐symbolism for word learning as vocabulary size varies. Participants learned form‐meaning mappings for words which were either congruent or incongruent with regard to sound‐symbolic relations. For the smaller vocabulary, sound‐symbolism facilitated learning individual words, whereas for larger vocabularies sound‐symbolism supported learning category distinctions. The changing properties of form‐meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development.

    Additional information

    https://git.io/v5BXJ
  • Broeder, D., & Lannom, L. (2014). Data Type Registries: A Research Data Alliance Working Group. D-Lib Magazine, 20, 1. doi:10.1045/january2014-broeder.

    Abstract

    Automated processing of large amounts of scientific data, especially across domains, requires that the data can be selected and parsed without human intervention. Precise characterization of that data, as in typing, is needed once the processing goes beyond the realm of domain specific or local research group assumptions. The Research Data Alliance (RDA) Data Type Registries Working Group (DTR-WG) was assembled to address this issue through the creation of a Data Type Registry methodology, data model, and prototype. The WG was approved by the RDA Council during March of 2013 and will complete its work in mid-2014, between the third and fourth RDA Plenaries.
  • Broersma, M., & Cutler, A. (2008). Phantom word activation in L2. System, 36(1), 22-34. doi:10.1016/j.system.2007.11.003.

    Abstract

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two studies illustrating this. First, in a lexical decision study, L2 listeners accepted (but L1 listeners did not accept) spoken non-words such as groof or flide as real English words. Second, a priming study demonstrated that the same spoken non-words made recognition of the real words groove, flight easier for L2 (but not L1) listeners, suggesting that, for the L2 listeners only, these real words had been activated by the spoken non-word input. We propose that further understanding of the activation and competition process in L2 lexical processing could lead to new understanding of L2 listening difficulty.
  • Broersma, M. (2008). Flexible cue use in nonnative phonetic categorization (L). Journal of the Acoustical Society of America, 124(2), 712-715. doi:10.1121/1.2940578.

    Abstract

    Native and nonnative listeners categorized final /v/ versus /f/ in English nonwords. Fricatives followed phonetically long originally /v/-preceding or short originally /f/-preceding vowels. Vowel duration was constant for each participant and sometimes mismatched other voicing cues. Previous results showed that English, but not Dutch, listeners (whose L1 has no final voicing contrast) nevertheless used the misleading vowel duration for /v/-/f/ categorization. New analyses showed that Dutch listeners did use vowel duration initially, but quickly reduced its use, whereas the English listeners used it consistently throughout the experiment. Thus, nonnative listeners adapted to the stimuli more flexibly than native listeners did.
  • Brouwer, S., & Bradlow, A. R. (2014). Contextual variability during speech-in-speech recognition. The Journal of the Acoustical Society of America, 136(1), EL26-EL32. doi:10.1121/1.4881322.

    Abstract

    This study examined the influence of background language variation on speech recognition. English listeners performed an English sentence recognition task in either “pure” background conditions in which all trials had either English or Dutch background babble or in mixed background conditions in which the background language varied across trials (i.e., a mix of English and Dutch or one of these background languages mixed with quiet trials). This design allowed the authors to compare performance on identical trials across pure and mixed conditions. The data reveal that speech-in-speech recognition is sensitive to contextual variation in terms of the target-background language (mis)match depending on the relative ease/difficulty of the test trials in relation to the surrounding trials.
  • Brown, P. (2008). Up, down, and across the land: Landscape terms and place names in Tzeltal. Language Sciences, 30(2/3), 151-181. doi:10.1016/j.langsci.2006.12.003.

    Abstract

    The Tzeltal language is spoken in a mountainous region of southern Mexico by some 280,000 Mayan corn farmers. This paper focuses on landscape and place vocabulary in the Tzeltal municipio of Tenejapa, where speakers use an absolute system of spatial reckoning based on the overall uphill (southward)/downhill (northward) slope of the land. The paper examines the formal and functional properties of the Tenejapa Tzeltal vocabulary labelling features of the local landscape and relates it to spatial vocabulary for describing locative relations, including the uphill/downhill axis for spatial reckoning as well as body part terms for specifying parts of locative grounds. I then examine the local place names, discuss their semantic and morphosyntactic properties, and relate them to the landscape vocabulary, to spatial vocabulary, and also to cultural narratives about events associated with particular places. I conclude with some observations on the determinants of landscape and place terminology in Tzeltal, and what this vocabulary and how it is used reveal about the conceptualization of landscape and places.
  • Brown, A., & Gullberg, M. (2008). Bidirectional crosslinguistic influence in L1-L2 encoding of manner in speech and gesture. Studies in Second Language Acquisition, 30(2), 225-251. doi:10.1017/S0272263108080327.

    Abstract

    Whereas most research in SLA assumes the relationship between the first language (L1) and the second language (L2) to be unidirectional, this study investigates the possibility of a bidirectional relationship. We examine the domain of manner of motion, in which monolingual Japanese and English speakers differ both in speech and gesture. Parallel influences of the L1 on the L2 and the L2 on the L1 were found in production from native Japanese speakers with intermediate knowledge of English. These effects, which were strongest in gesture patterns, demonstrate that (a) bidirectional interaction between languages in the multilingual mind can occur even with intermediate proficiency in the L2 and (b) gesture analyses can offer insights on interactions between languages beyond those observed through analyses of speech alone.
  • Brown, A. (2008). Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker. Gesture, 8(2), 256-276. doi:10.1075/gest.8.2.08bro.

    Abstract

    Abundant evidence across languages, structures, proficiencies, and modalities shows that properties of first languages influence performance in second languages. This paper presents an alternative perspective on the interaction between established and emerging languages within second language speakers by arguing that an L2 can influence an L1, even at relatively low proficiency levels. Analyses of the gesture viewpoint employed in English and Japanese descriptions of motion events revealed systematic between-language and within-language differences. Monolingual Japanese speakers used significantly more Character Viewpoint than monolingual English speakers, who predominantly employed Observer Viewpoint. In their L1 and their L2, however, native Japanese speakers with intermediate knowledge of English patterned more like the monolingual English speakers than their monolingual Japanese counterparts. After controlling for effects of cultural exposure, these results offer valuable insights into both the nature of cross-linguistic interactions within individuals and potential factors underlying gesture viewpoint.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Brown-Schmidt, S., & Konopka, A. E. (2008). Little houses and casas pequeñas: Message formulation and syntactic form in unscripted speech with speakers of English and Spanish. Cognition, 109(2), 274-280. doi:10.1016/j.cognition.2008.07.011.

    Abstract

    During unscripted speech, speakers coordinate the formulation of pre-linguistic messages with the linguistic processes that implement those messages into speech. We examine the process of constructing a contextually appropriate message and interfacing that message with utterance planning in English (the small butterfly) and Spanish (la mariposa pequeña) during an unscripted, interactive task. The coordination of gaze and speech during formulation of these messages is used to evaluate two hypotheses regarding the lower limit on the size of message planning units, namely whether messages are planned in units isomorphous to entire phrases or units isomorphous to single lexical items. Comparing the planning of fluent pre-nominal adjectives in English and post-nominal adjectives in Spanish showed that size information is added to the message later in Spanish than English, suggesting that speakers can prepare pre-linguistic messages in lexically-sized units. The results also suggest that the speaker can use disfluency to coordinate the transition from thought to speech.
  • Brucato, N., DeLisi, L. E., Fisher, S. E., & Francks, C. (2014). Hypomethylation of the paternally inherited LRRTM1 promoter linked to schizophrenia. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 165(7), 555-563. doi:10.1002/ajmg.b.32258.

    Abstract

    Epigenetic effects on psychiatric traits remain relatively under-studied, and it remains unclear what the sizes of individual epigenetic effects may be, or how they vary between different clinical populations. The gene LRRTM1 (chromosome 2p12) has previously been linked and associated with schizophrenia in a parent-of-origin manner in a set of affected siblings (LOD = 4.72), indirectly suggesting a disruption of paternal imprinting at this locus in these families. From the same set of siblings that originally showed strong linkage at this locus, we analyzed 99 individuals using 454-bisulfite sequencing, from whole blood DNA, to measure the level of DNA methylation in the promoter region of LRRTM1. We also assessed seven additional loci that would be informative to compare. Paternal identity-by-descent sharing at LRRTM1, within sibling pairs, was linked to their similarity of methylation at the gene's promoter. Reduced methylation at the promoter showed a significant association with schizophrenia. Sibling pairs concordant for schizophrenia showed more similar methylation levels at the LRRTM1 promoter than diagnostically discordant pairs. The alleles of common SNPs spanning the locus did not explain this epigenetic linkage, which can therefore be considered as largely independent of DNA sequence variation and would not be detected in standard genetic association analysis. Our data suggest that hypomethylation at the LRRTM1 promoter, particularly of the paternally inherited allele, was a risk factor for the development of schizophrenia in this set of siblings affected with familial schizophrenia, and that had previously showed linkage at this locus in an affected-sib-pair context.
  • Bulut, T., Cheng, S. K., Xu, K. Y., Hung, D. L., & Wu, D. H. (2018). Is there a processing preference for object relative clauses in Chinese? Evidence from ERPs. Frontiers in Psychology, 9: 995. doi:10.3389/fpsyg.2018.00995.

    Abstract

    A consistent finding across head-initial languages, such as English, is that subject relative clauses (SRCs) are easier to comprehend than object relative clauses (ORCs). However, several studies in Mandarin Chinese, a head-final language, revealed the opposite pattern, which might be modulated by working memory (WM) as suggested by recent results from self-paced reading performance. In the present study, event-related potentials (ERPs) were recorded when participants with high and low WM spans (measured by forward digit span and operation span tests) read Chinese ORCs and SRCs. The results revealed an N400-P600 complex elicited by ORCs on the relativizer, whose magnitude was modulated by the WM span. On the other hand, a P600 effect was elicited by SRCs on the head noun, whose magnitude was not affected by the WM span. These findings paint a complex picture of relative clause processing in Chinese such that opposing factors involving structural ambiguities and integration of filler-gap dependencies influence processing dynamics in Chinese relative clauses.
  • Burenhult, N. (2008). Spatial coordinate systems in demonstrative meaning. Linguistic Typology, 12(1), 99-142. doi:10.1515/LITY.2008.032.

    Abstract

    Exploring the semantic encoding of a group of crosslinguistically uncommon “spatial-coordinate demonstratives”, this work establishes the existence of demonstratives whose function is to project angular search domains, thus invoking proper coordinate systems (or “frames of reference”). What is special about these distinctions is that they rely on a spatial asymmetry in relativizing a demonstrative referent (representing the Figure) to the deictic center (representing the Ground). A semantic typology of such demonstratives is constructed based on the nature of the asymmetries they employ. A major distinction is proposed between asymmetries outside the deictic Figure-Ground array (e.g., features of the larger environment) and those within it (e.g., facets of the speaker/addressee dyad). A unique system of the latter type, present in Jahai, an Aslian (Mon-Khmer) language spoken by groups of hunter-gatherers in the Malay Peninsula, is introduced and explored in detail using elicited data as well as natural conversational data captured on video. Although crosslinguistically unusual, spatial-coordinate demonstratives sit at the interface of issues central to current discourse in semantic-pragmatic theory: demonstrative function, deictic layout, and spatial frames of reference.
  • Burenhult, N. (2008). Streams of words: Hydrological lexicon in Jahai. Language Sciences, 30(2/3), 182-199. doi:10.1016/j.langsci.2006.12.005.

    Abstract

    This article investigates hydrological lexicon in Jahai, a Mon-Khmer language of the Malay Peninsula. Setting out from an analysis of the structural and semantic properties as well as the indigenous vs. borrowed origin of lexicon related to drainage, it teases out a set of distinct lexical systems for reference to and description of hydrological features. These include (1) indigenous nominal labels subcategorised by metaphor, (2) borrowed nominal labels, (3) verbals referring to properties and processes of water, (4) a set of motion verbs, and (5) place names. The lexical systems, functionally diverse and driven by different factors, illustrate that principles and strategies of geographical categorisation can vary systematically and profoundly within a single language.
  • Burenhult, N., & Levinson, S. C. (2008). Language and landscape: A cross-linguistic perspective. Language Sciences, 30(2/3), 135-150. doi:10.1016/j.langsci.2006.12.028.

    Abstract

    This special issue is the outcome of collaborative work on the relationship between language and landscape, carried out in the Language and Cognition Group at the Max Planck Institute for Psycholinguistics. The contributions explore the linguistic categories of landscape terms and place names in nine genetically, typologically and geographically diverse languages, drawing on data from first-hand fieldwork. The present introductory article lays out the reasons why the domain of landscape is of central interest to the language sciences and beyond, and it outlines some of the major patterns that emerge from the cross-linguistic comparison which the papers invite. The data point to considerable variation within and across languages in how systems of landscape terms and place names are ontologised. This has important implications for practical applications from international law to modern navigation systems.
  • Burenhult, N. (Ed.). (2008). Language and landscape: Geographical ontology in cross-linguistic perspective [Special Issue]. Language Sciences, 30(2/3).

  • Burkhardt, P., Avrutin, S., Piñango, M. M., & Ruigendijk, E. (2008). Slower-than-normal syntactic processing in agrammatic Broca's aphasia: Evidence from Dutch. Journal of Neurolinguistics, 21(2), 120-137. doi:10.1016/j.jneuroling.2006.10.004.

    Abstract

    Studies of agrammatic Broca's aphasia reveal a diverging pattern of performance in the comprehension of reflexive elements: offline, performance seems unimpaired, whereas online—and in contrast to both matching controls and Wernicke's patients—no antecedent reactivation is observed at the reflexive. Here we propose that this difference characterizes the agrammatic comprehension deficit as a result of slower-than-normal syntactic structure formation. To test this characterization, the comprehension of three Dutch agrammatic patients and matching control participants was investigated utilizing the cross-modal lexical decision (CMLD) interference task. Two types of reflexive-antecedent dependencies were tested, which have already been shown to exert distinct processing demands on the comprehension system as a function of the level at which the dependency was formed. Our hypothesis predicts that if the agrammatic system has a processing limitation such that syntactic structure is built in a protracted manner, this limitation will be reflected in delayed interpretation. Confirming previous findings, the Dutch patients show an effect of distinct processing demands for the two types of reflexive-antecedent dependencies but with a temporal delay. We argue that this delayed syntactic structure formation is the result of limited processing capacity that specifically affects the syntactic system.
  • Byun, K.-S., De Vos, C., Bradford, A., Zeshan, U., & Levinson, S. C. (2018). First encounters: Repair sequences in cross-signing. Topics in Cognitive Science, 10(2), 314-334. doi:10.1111/tops.12303.

    Abstract

    Most human communication is between people who speak or sign the same languages. Nevertheless, communication is to some extent possible where there is no language in common, as every tourist knows. How this works is of some theoretical interest (Levinson 2006). A nice arena to explore this capacity is when deaf signers of different languages meet for the first time, and are able to use the iconic affordances of sign to begin communication. Here we focus on Other-Initiated Repair (OIR), that is, where one signer makes clear he or she does not understand, thus initiating repair of the prior conversational turn. OIR sequences are typically of a three-turn structure (Schegloff 2007) including the problem source turn (T-1), the initiation of repair (T0), and the turn offering a problem solution (T+1). These sequences seem to have a universal structure (Dingemanse et al. 2013). We find that in most cases where such OIR occur, the signer of the troublesome turn (T-1) foresees potential difficulty, and marks the utterance with 'try markers' (Sacks & Schegloff 1979, Moerman 1988) which pause to invite recognition. The signers use repetition, gestural holds, prosodic lengthening and eyegaze at the addressee as such try-markers. Moreover, when T-1 is try-marked this allows for faster response times of T+1 with respect to T0. This finding suggests that signers in these 'first encounter' situations actively anticipate potential trouble and, through try-marking, mobilize and facilitate OIRs. The suggestion is that heightened meta-linguistic awareness can be utilized to deal with these problems at the limits of our communicational ability.
  • Cai, D., Fonteijn, H. M., Guadalupe, T., Zwiers, M., Wittfeld, K., Teumer, A., Hoogman, M., Arias Vásquez, A., Yang, Y., Buitelaar, J., Fernández, G., Brunner, H. G., Van Bokhoven, H., Franke, B., Hegenscheid, K., Homuth, G., Fisher, S. E., Grabe, H. J., Francks, C., & Hagoort, P. (2014). A genome wide search for quantitative trait loci affecting the cortical surface area and thickness of Heschl's gyrus. Genes, Brain and Behavior, 13, 675-685. doi:10.1111/gbb.12157.

    Abstract

    Heschl's gyrus (HG) is a core region of the auditory cortex whose morphology is highly variable across individuals. This variability has been linked to sound perception ability in both speech and music domains. Previous studies show that variations in morphological features of HG, such as cortical surface area and thickness, are heritable. To identify genetic variants that affect HG morphology, we conducted a genome-wide association scan (GWAS) meta-analysis in 3054 healthy individuals using HG surface area and thickness as quantitative traits. None of the single nucleotide polymorphisms (SNPs) showed association P values that would survive correction for multiple testing over the genome. The most significant association was found between right HG area and SNP rs72932726 close to gene DCBLD2 (3q12.1; P = 2.77 × 10^-7). This SNP was also associated with other regions involved in speech processing. The SNP rs333332 within gene KALRN (3q21.2; P = 2.27 × 10^-6) and rs143000161 near gene COBLL1 (2q24.3; P = 2.40 × 10^-6) were associated with the area and thickness of left HG, respectively. Both genes are involved in the development of the nervous system. The SNP rs7062395 close to the X-linked deafness gene POU3F4 was associated with right HG thickness (Xq21.1; P = 2.38 × 10^-6). This is the first molecular genetic analysis of variability in HG morphology.
  • Capilla, A., Schoffelen, J.-M., Paterson, G., Thut, G., & Gross, J. (2014). Dissociated α-band modulations in the dorsal and ventral visual pathways in visuospatial attention and perception. Cerebral Cortex., 24(2), 550-561. doi:10.1093/cercor/bhs343.

    Abstract

    Modulations of occipito-parietal α-band (8–14 Hz) power that are opposite in direction (α-enhancement vs. α-suppression) and origin of generation (ipsilateral vs. contralateral to the locus of attention) are a robust correlate of anticipatory visuospatial attention. Yet, the neural generators of these α-band modulations, their interdependence across homotopic areas, and their respective contribution to subsequent perception remain unclear. To shed light on these questions, we employed magnetoencephalography, while human volunteers performed a spatially cued detection task. Replicating previous findings, we found α-power enhancement ipsilateral to the attended hemifield and contralateral α-suppression over occipitoparietal sensors. Source localization (beamforming) analysis showed that α-enhancement and suppression were generated in 2 distinct brain regions, located in the dorsal and ventral visual streams, respectively. Moreover, α-enhancement and suppression showed different dynamics and contribution to perception. In contrast to the initial and transient dorsal α-enhancement, α-suppression in ventro-lateral occipital cortex was sustained and influenced subsequent target detection. This anticipatory biasing of ventrolateral extrastriate α-activity probably reflects increased receptivity in the brain region specialized in processing upcoming target features. Our results add to current models on the role of α-oscillations in attention orienting by showing that α-enhancement and suppression can be dissociated in time, space, and perceptual relevance.

    Additional information

    Capilla_Suppl_Data.pdf
  • Carota, F., & Sirigu, A. (2008). Neural Bases of Sequence Processing in Action and Language. Language Learning, 58(1), 179-199. doi:10.1111/j.1467-9922.2008.00470.x.

    Abstract

    Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for planning contiguous discourse segments and syntactic patterns in language. The neural encoding of sequential event knowledge and its domain dependency is a central issue in cognitive neuroscience. Converging evidence shows the involvement of a dedicated neural substrate, including the prefrontal cortex and Broca's area, in the representation and the processing of sequential event structure. After reviewing major representational models of sequential mechanisms in action and language, we discuss relevant neuropsychological and neuroimaging findings on the temporal organization of sequencing and sequence processing in both domains, suggesting that sequential event knowledge may be modularly organized through prefrontal and frontal subregions.
  • Carter, D. M., Broersma, M., Donnelly, K., & Konopka, A. E. (2018). Presenting the Bangor autoglosser and the Bangor automated clause-splitter. Digital Scholarship in the Humanities, 33(1), 21-28. doi:10.1093/llc/fqw065.

    Abstract

    Until recently, corpus studies of natural bilingual speech and, more specifically, codeswitching in bilingual speech have used a manual method of glossing, part-of-speech tagging, and clause-splitting to prepare the data for analysis. In our article, we present innovative tools developed for the first large-scale corpus study of codeswitching triggered by cognates. A study of this size was only possible due to the automation of several steps, such as morpheme-by-morpheme glossing, splitting complex clauses into simple clauses, and the analysis of internal and external codeswitching through the use of database tables, algorithms, and a scripting language.
  • Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind? Memory & Cognition, 36(6), 1047-1056. doi:10.3758/MC.36.6.1047.

    Abstract

    People often describe things that are similar as close and things that are dissimilar as far apart. Does the way people talk about similarity reveal something fundamental about the way they conceptualize it? Three experiments tested the relationship between similarity and spatial proximity that is encoded in metaphors in language. Similarity ratings for pairs of words or pictures varied as a function of how far apart the stimuli appeared on the computer screen, but the influence of distance on similarity differed depending on the type of judgments the participants made. Stimuli presented closer together were rated more similar during conceptual judgments of abstract entities or unseen object properties but were rated less similar during perceptual judgments of visual appearance. These contrasting results underscore the importance of testing predictions based on linguistic metaphors experimentally and suggest that our sense of similarity arises from our ability to combine available perceptual information with stored knowledge of experiential regularities.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(suppl. 1), 63-79. doi:10.1111/j.1467-9922.2008.00462.x.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D., & Boroditsky, L. (2008). Time in the mind: Using space to think about time. Cognition, 106, 579-593. doi:10.1016/j.cognition.2007.03.004.

    Abstract

    How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people’s more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.
  • Ceroni, F., Simpson, N. H., Francks, C., Baird, G., Conti-Ramsden, G., Clark, A., Bolton, P. F., Hennessy, E. R., Donnelly, P., Bentley, D. R., Martin, H., IMGSAC, SLI Consortium, WGS500 Consortium, Parr, J., Pagnamenta, A. T., Maestrini, E., Bacchelli, E., Fisher, S. E., & Newbury, D. F. (2014). Homozygous microdeletion of exon 5 in ZNF277 in a girl with specific language impairment. European Journal of Human Genetics, 22, 1165-1171. doi:10.1038/ejhg.2014.4.

    Abstract

    Specific language impairment (SLI), an unexpected failure to develop appropriate language skills despite adequate non-verbal intelligence, is a heterogeneous multifactorial disorder with a complex genetic basis. We identified a homozygous microdeletion of 21,379 bp in the ZNF277 gene (NM_021994.2), encompassing exon 5, in an individual with severe receptive and expressive language impairment. The microdeletion was not found in the proband’s affected sister or her brother who had mild language impairment. However, it was inherited from both parents, each of whom carries a heterozygous microdeletion and has a history of language problems. The microdeletion falls within the AUTS1 locus, a region linked to autistic spectrum disorders (ASDs). Moreover, ZNF277 is adjacent to the DOCK4 and IMMP2L genes, which have been implicated in ASD. We screened for the presence of ZNF277 microdeletions in cohorts of children with SLI or ASD and panels of control subjects. ZNF277 microdeletions were at an increased allelic frequency in SLI probands (1.1%) compared with both ASD family members (0.3%) and independent controls (0.4%). We performed quantitative RT-PCR analyses of the expression of IMMP2L, DOCK4 and ZNF277 in individuals carrying either an IMMP2L_DOCK4 microdeletion or a ZNF277 microdeletion. Although ZNF277 microdeletions reduce the expression of ZNF277, they do not alter the levels of DOCK4 or IMMP2L transcripts. Conversely, IMMP2L_DOCK4 microdeletions do not affect the expression levels of ZNF277. We postulate that ZNF277 microdeletions may contribute to the risk of language impairments in a manner that is independent of the autism risk loci previously described in this region.
  • Chan, A., Yang, W., Chang, F., & Kidd, E. (2018). Four-year-old Cantonese-speaking children's online processing of relative clauses: A permutation analysis. Journal of Child Language, 45(1), 174-203. doi:10.1017/s0305000917000198.

    Abstract

    We report on an eye-tracking study that investigated four-year-old Cantonese-speaking children's online processing of subject and object relative clauses (RCs). Children's eye-movements were recorded as they listened to RC structures identifying a unique referent (e.g. “Can you pick up the horse that pushed the pig?”). Two RC types, classifier (CL) and ge3 RCs, were tested in a between-participants design. The two RC types differ in their syntactic analyses and frequency of occurrence, providing an important point of comparison for theories of RC acquisition and processing. A permutation analysis showed that the two structures were processed differently: CL RCs showed a significant object-over-subject advantage, whereas ge3 RCs showed the opposite effect. This study shows that children can have different preferences even for two very similar RC structures within the same language, suggesting that syntactic processing preferences are shaped by the unique features of particular constructions both within and across different linguistic typologies.
  • Chen, C.-h., Zhang, Y., & Yu, C. (2018). Learning object names at different hierarchical levels using cross-situational statistics. Cognitive Science, 42(S2), 591-605. doi:10.1111/cogs.12516.

    Abstract

    Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input.
  • Chen, X. S., White, W. T. J., Collins, L. J., & Penny, D. (2008). Computational identification of four spliceosomal snRNAs from the deep-branch eukaryote Giardia intestinalis. PLoS One, 3(8), e3106. doi:10.1371/journal.pone.0003106.

    Abstract

    RNAs processing other RNAs is very general in eukaryotes, but it is not clear to what extent it is ancestral to eukaryotes. Here we focus on pre-mRNA splicing, one of the most important RNA-processing mechanisms in eukaryotes. In most eukaryotes splicing is predominantly catalysed by the major spliceosome complex, which consists of five uridine-rich small nuclear RNAs (U-snRNAs) and over 200 proteins in humans. Three major spliceosomal introns have been found experimentally in Giardia; one Giardia U-snRNA (U5) and a number of spliceosomal proteins have also been identified. However, because of the low sequence similarity between the Giardia ncRNAs and those of other eukaryotes, the other U-snRNAs of Giardia had not been found. Using two computational methods, candidates for Giardia U1, U2, U4 and U6 snRNAs were identified in this study and shown by RT-PCR to be expressed. We found that identifying a U2 candidate helped identify U6 and U4 based on interactions between them. Secondary structural modelling of the Giardia U-snRNA candidates revealed typical features of eukaryotic U-snRNAs. We demonstrate a successful approach to combine computational and experimental methods to identify expected ncRNAs in a highly divergent protist genome. Our findings reinforce the conclusion that spliceosomal small-nuclear RNAs existed in the last common ancestor of eukaryotes.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
  • Chu, M., Meyer, A. S., Foulkes, L., & Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: The role of cognitive abilities and empathy. Journal of Experimental Psychology: General, 143, 694-709. doi:10.1037/a0033861.

    Abstract

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.
  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Chu, M., & Hagoort, P. (2014). Synchronization of speech and gesture: Evidence for interaction in action. Journal of Experimental Psychology: General, 143(4), 1726-1741. doi:10.1037/a0036281.

    Abstract

    Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is continuous information exchange between the gesture and speech systems, during both their planning and execution phases. According to the ballistic view, information exchange occurs only during the planning phases of gesture and speech, but the 2 systems become independent once their execution has been initiated. In all experiments, participants were required to point to and/or name a light that had just lit up. Virtual reality and motion tracking technologies were used to disrupt their gesture or speech execution. Participants delayed their speech onset when their gesture was disrupted. They did so even when their gesture was disrupted at its late phase and even when they received only the kinesthetic feedback of their gesture. Also, participants prolonged their gestures when their speech was disrupted. These findings support the interactive view and add new constraints on models of speech and gesture production.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2008). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Clough, S., & Hilverman, C. (2018). Hand gestures and how they help children learn. Frontiers for Young Minds, 6: 29. doi:10.3389/frym.2018.00029.

    Abstract

    When we talk, we often make hand movements called gestures at the same time. Although just about everyone gestures when they talk, we usually do not even notice the gestures. Our hand gestures play an important role in helping us learn and remember! When we see other people gesturing when they talk—or when we gesture when we talk ourselves—we are more likely to remember the information being talked about than if gestures were not involved. Our hand gestures can even indicate when we are ready to learn new things! In this article, we explain how gestures can help learning. To investigate this, we studied children learning a new mathematical concept called equivalence. We hope that this article will help you notice when you, your friends and family, and your teachers are gesturing, and that it will help you understand how those gestures can help people learn.
  • Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 886-899. doi:10.1037/0278-7393.34.4.886.

    Abstract

    Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.
  • Cooper, R. P., & Guest, O. (2014). Implementations are not specifications: Specification, replication and experimentation in computational cognitive modeling. Cognitive Systems Research, 27, 42-49. doi:10.1016/j.cogsys.2013.05.001.

    Abstract

    Contemporary methods of computational cognitive modeling have recently been criticized by Addyman and French (2012) on the grounds that they have not kept up with developments in computer technology and human–computer interaction. They present a manifesto for change according to which, it is argued, modelers should devote more effort to making their models accessible, both to non-modelers (with an appropriate easy-to-use user interface) and modelers alike. We agree that models, like data, should be freely available according to the normal standards of science, but caution against confusing implementations with specifications. Models may embody theories, but they generally also include implementation assumptions. Cognitive modeling methodology needs to be sensitive to this. We argue that specification, replication and experimentation are methodological approaches that can address this issue.
  • Corcoran, A. W., Alday, P. M., Schlesewsky, M., & Bornkessel-Schlesewsky, I. (2018). Toward a reliable, automated method of individual alpha frequency (IAF) quantification. Psychophysiology, 55(7): e13064. doi:10.1111/psyp.13064.

    Abstract

    Individual alpha frequency (IAF) is a promising electrophysiological marker of interindividual differences in cognitive function. IAF has been linked with trait-like differences in information processing and general intelligence, and provides an empirical basis for the definition of individualized frequency bands. Despite its widespread application, however, there is little consensus on the optimal method for estimating IAF, and many common approaches are prone to bias and inconsistency. Here, we describe an automated strategy for deriving two of the most prevalent IAF estimators in the literature: peak alpha frequency (PAF) and center of gravity (CoG). These indices are calculated from resting-state power spectra that have been smoothed using a Savitzky-Golay filter (SGF). We evaluate the performance characteristics of this analysis procedure in both empirical and simulated EEG data sets. Applying the SGF technique to resting-state data from n = 63 healthy adults furnished 61 PAF and 62 CoG estimates. The statistical properties of these estimates were consistent with previous reports. Simulation analyses revealed that the SGF routine was able to reliably extract target alpha components, even under relatively noisy spectral conditions. The routine consistently outperformed a simpler method of automated peak detection that did not involve spectral smoothing. The SGF technique is fast, open source, and available in two popular programming languages (MATLAB, Python), and thus can easily be integrated within the most popular M/EEG toolsets (EEGLAB, FieldTrip, MNE-Python). As such, it affords a convenient tool for improving the reliability and replicability of future IAF-related research.

    Additional information

    psyp13064-sup-0001-s01.docx
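    As a rough illustration of the kind of procedure this abstract describes (not the authors' actual implementation), one could smooth a resting-state power spectrum with a Savitzky-Golay filter and then derive the peak alpha frequency (PAF) and center of gravity (CoG) within the alpha band. The function name, band limits, and filter parameters below are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter, find_peaks

    def estimate_iaf(freqs, psd, band=(7.0, 13.0), window=11, polyorder=5):
        """Sketch of SGF-based IAF estimation: return (paf, cog).

        freqs : 1-D array of frequencies (Hz)
        psd   : 1-D array of spectral power at those frequencies
        """
        # Smooth the spectrum to suppress noise before peak detection
        smoothed = savgol_filter(psd, window_length=window, polyorder=polyorder)
        # Restrict to the (here fixed, illustrative) alpha band
        mask = (freqs >= band[0]) & (freqs <= band[1])
        f_band, p_band = freqs[mask], smoothed[mask]
        # PAF: frequency of the highest local maximum in the band (None if no peak)
        peaks, _ = find_peaks(p_band)
        paf = float(f_band[peaks[np.argmax(p_band[peaks])]]) if len(peaks) else None
        # CoG: power-weighted mean frequency across the band
        cog = float(np.sum(f_band * p_band) / np.sum(p_band))
        return paf, cog
    ```

    On a synthetic spectrum with an alpha bump near 10 Hz, this sketch recovers a PAF close to 10 Hz; the published routine additionally handles cases with no clear peak and averages estimates across channels.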
  • Corps, R. E., Gambi, C., & Pickering, M. J. (2018). Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation. Discourse processes, 55(2, SI), 230-240. doi:10.1080/0163853X.2017.1330031.

    Abstract

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance?, (2) Can listeners buffer their prepared response?, and (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.
  • Corps, R. E., Crossley, A., Gambi, C., & Pickering, M. J. (2018). Early preparation during turn-taking: Listeners use content predictions to determine what to say but not when to say it. Cognition, 175, 77-95. doi:10.1016/j.cognition.2018.01.015.

    Abstract

    During conversation, there is often little gap between interlocutors’ utterances. In two pairs of experiments, we manipulated the content predictability of yes/no questions to investigate whether listeners achieve such coordination by (i) preparing a response as early as possible or (ii) predicting the end of the speaker’s turn. To assess these two mechanisms, we varied the participants’ task: They either pressed a button when they thought the question was about to end (Experiments 1a and 2a), or verbally answered the questions with either yes or no (Experiments 1b and 2b). Predictability effects were present when participants had to prepare a verbal response, but not when they had to predict the turn-end. These findings suggest content prediction facilitates turn-taking because it allows listeners to prepare their own response early, rather than because it helps them predict when the speaker will reach the end of their turn.

    Additional information

    Supplementary material
  • Cousijn, H., Eissing, M., Fernández, G., Fisher, S. E., Franke, B., Zwiers, M., Harrison, P. J., & Arias-Vasquez, A. (2014). No effect of schizophrenia risk genes MIR137, TCF4, and ZNF804A on macroscopic brain structure. Schizophrenia Research, 159, 329-332. doi:10.1016/j.schres.2014.08.007.

    Abstract

    Single nucleotide polymorphisms (SNPs) within the MIR137, TCF4, and ZNF804A genes show genome-wide association to schizophrenia. However, the biological basis for the associations is unknown. Here, we tested the effects of these genes on brain structure in 1300 healthy adults. Using volumetry and voxel-based morphometry, neither gene-wide effects—including the combined effect of the genes—nor single SNP effects—including specific psychosis risk SNPs—were found on total brain volume, grey matter, white matter, or hippocampal volume. These results suggest that the associations between these risk genes and schizophrenia are unlikely to be mediated via effects on macroscopic brain structure.
  • Creemers, A., Don, J., & Fenger, P. (2018). Some affixes are roots, others are heads. Natural Language & Linguistic Theory, 36(1), 45-84. doi:10.1007/s11049-017-9372-1.

    Abstract

    A recent debate in the morphological literature concerns the status of derivational affixes. While some linguists (Marantz 1997, 2001; Marvin 2003) consider derivational affixes a type of functional morpheme that realizes a categorial head, others (Lowenstamm 2015; De Belder 2011) argue that derivational affixes are roots. Our proposal, which finds its empirical basis in a study of Dutch derivational affixes, takes a middle position. We argue that there are two types of derivational affixes: some that are roots (i.e. lexical morphemes) and others that are categorial heads (i.e. functional morphemes). Affixes that are roots show ‘flexible’ categorial behavior, are subject to ‘lexical’ phonological rules, and may trigger idiosyncratic meanings. Affixes that realize categorial heads, on the other hand, are categorially rigid, do not trigger ‘lexical’ phonological rules nor allow for idiosyncrasies in their interpretation.
  • Cristia, A. (2008). Cue weighting at different ages. Purdue Linguistics Association Working Papers, 1, 87-105.
  • Cristia, A., & Seidl, A. (2008). Is infants' learning of sound patterns constrained by phonological features? Language Learning and Development, 4, 203-227. doi:10.1080/15475440802143109.

    Abstract

    Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Halle, 1968). Such abstract features may be present in the developing grammar either because they are part of a Universal Grammar included in the genetic endowment of humans (e.g., Hale, Kissock, & Reiss, 2006), or plausibly because infants induce features from their linguistic experience (e.g., Mielke, 2004). A first experiment tested 7-month-old infants' learning of an artificial grammar pattern involving either a set of sounds defined by a phonological feature, or a set of sounds that cannot be described with a single feature—an "arbitrary" set. Infants were able to induce the constraint and generalize it to a novel sound only for the set that shared the phonological feature. A second study showed that infants' inability to learn the arbitrary grouping was not due to their inability to encode a constraint on some of the sounds involved.
  • Cristia, A., Minagawa-Kawai, Y., Egorova, N., Gervain, J., Filippin, L., Cabrol, D., & Dupoux, E. (2014). Neural correlates of infant accent discrimination: An fNIRS study. Developmental Science, 17(4), 628-635. doi:10.1111/desc.12160.

    Abstract

    The present study investigated the neural correlates of infant discrimination of very similar linguistic varieties (Quebecois and Parisian French) using functional Near InfraRed Spectroscopy. In line with previous behavioral and electrophysiological data, there was no evidence that 3-month-olds discriminated the two regional accents, whereas 5-month-olds did, with the locus of discrimination in left anterior perisylvian regions. These neuroimaging results suggest that a developing language network relying crucially on left perisylvian cortices sustains infants' discrimination of similar linguistic varieties within this early period of infancy.
  • Cristia, A., Seidl, A., Junge, C., Soderstrom, M., & Hagoort, P. (2014). Predicting individual variation in language from infant speech perception measures. Child development, 85(4), 1330-1345. doi:10.1111/cdev.12193.

    Abstract

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, specifically concurrent or subsequent vocabulary size. If such findings hold up under scrutiny, they could both illuminate theoretical models of language development and contribute to the prediction of communicative disorders. A qualitative, systematic review of this emergent literature illustrated the variety of approaches that have been used and highlighted some conceptual problems regarding the measurements. A quantitative analysis of the same data established that the bivariate relation was significant, with correlations of similar strength to those found for well-established nonlinguistic predictors of language. Further exploration of infant speech perception predictors, particularly from a methodological perspective, is recommended.
  • Cristia, A., & Seidl, A. (2014). The hyperarticulation hypothesis of infant-directed speech. Journal of Child Language, 41(4), 913-934. doi:10.1017/S0305000912000669.

    Abstract

    Typically, the point vowels [i,ɑ,u] are acoustically more peripheral in infant-directed speech (IDS) compared to adult-directed speech (ADS). If caregivers seek to highlight lexically relevant contrasts in IDS, then two sounds that are contrastive should become more distinct, whereas two sounds that are surface realizations of the same underlying sound category should not. To test this prediction, vowels that are phonemically contrastive ([i-ɪ] and [eɪ-ε]), vowels that map onto the same underlying category ([æ- ] and [ε- ]), and the point vowels [i,ɑ,u] were elicited in IDS and ADS by American English mothers of two age groups of infants (four- and eleven-month-olds). As in other work, point vowels were produced in more peripheral positions in IDS compared to ADS. However, there was little evidence of hyperarticulation per se (e.g. [i-ɪ] was hypoarticulated). We suggest that across-the-board lexically based hyperarticulation is not a necessary feature of IDS.

    Additional information

    CORRIGENDUM
  • Cristia, A., & Seidl, A. (2008). Why cross-linguistic frequency cannot be equated with ease of acquisition. University of Pennsylvania Working Papers in Linguistics, 14(1), 71-82. Retrieved from http://repository.upenn.edu/pwpl/vol14/iss1/6.
  • Cronin, K. A., Pieper, B., Van Leeuwen, E. J. C., Mundry, R., & Haun, D. B. M. (2014). Problem solving in the presence of others: How rank and relationship quality impact resource acquisition in chimpanzees (Pan troglodytes). PLoS One, 9(4): e93204. doi:10.1371/journal.pone.0093204.

    Abstract

    In the wild, chimpanzees (Pan troglodytes) are often faced with clumped food resources that they may know how to access but abstain from doing so due to social pressures. To better understand how social settings influence resource acquisition, we tested fifteen semi-wild chimpanzees from two social groups alone and in the presence of others. We investigated how resource acquisition was affected by relative social dominance, whether collaborative problem solving or (active or passive) sharing occurred amongst any of the dyads, and whether these outcomes were related to relationship quality as determined from six months of observational data. Results indicated that chimpanzees, regardless of rank, obtained fewer rewards when tested in the presence of others compared to when they were tested alone. Chimpanzees demonstrated behavioral inhibition; chimpanzees who showed proficient skill when alone often abstained from solving the task when in the presence of others. Finally, individuals with close social relationships spent more time together in the problem solving space, but collaboration and sharing were infrequent and sessions in which collaboration or sharing did occur contained more instances of aggression. Group living provides benefits and imposes costs, and these findings highlight that one cost of group living may be diminishing productive individual behaviors.