Publications

  • Mazzone, M., & Campisi, E. (2013). Distributed intentionality: A model of intentional behavior in humans. Philosophical Psychology, 26, 267-290. doi:10.1080/09515089.2011.641743.

    Abstract

    Is human behavior, and more specifically linguistic behavior, intentional? Some scholars have proposed that action is driven in a top-down manner by one single intention—i.e., one single conscious goal. Others have argued that actions are mostly non-intentional, insofar as often the single goal driving an action is not consciously represented. We intend to claim that both alternatives are unsatisfactory; more specifically, we claim that actions are intentional, but intentionality is distributed across complex goal-directed representations of action, rather than concentrated in single intentions driving action in a top-down manner. These complex representations encompass a multiplicity of goals, together with other components which are not goals themselves, and are the result of a largely automatic dynamic of activation; such an automatic processing, however, does not preclude the involvement of conscious attention, shifting from one component to the other of the overall goal-directed representation.

  • McGettigan, C., Eisner, F., Agnew, Z. K., Manly, T., Wisbey, D., & Scott, S. K. (2013). T'ain't what you say, it's the way that you say it—Left insula and inferior frontal cortex work in interaction with superior temporal regions to control the performance of vocal impersonations. Journal of Cognitive Neuroscience, 25(11), 1875-1886. doi:10.1162/jocn_a_00427.

    Abstract

    Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129–135, 2004]. Our voices are highly flexible and dynamic; talkers speak differently, depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right middle/anterior STS showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts.
  • McKone, E., Wan, L., Pidcock, M., Crookes, K., Reynolds, K., Dawel, A., Kidd, E., & Fiorentini, C. (2019). A critical period for faces: Other-race face recognition is improved by childhood but not adult social contact. Scientific Reports, 9: 12820. doi:10.1038/s41598-019-49202-0.

    Abstract

    Poor recognition of other-race faces is ubiquitous around the world. We resolve a longstanding contradiction in the literature concerning whether interracial social contact improves the other-race effect. For the first time, we measure the age at which contact was experienced. Taking advantage of unusual demographics allowing dissociation of childhood from adult contact, results show sufficient childhood contact eliminated poor other-race recognition altogether (confirming inter-country adoption studies). Critically, however, the developmental window for easy acquisition of other-race faces closed by approximately 12 years of age and social contact as an adult — even over several years and involving many other-race friends — produced no improvement. Theoretically, this pattern of developmental change in plasticity mirrors that found in language, suggesting a shared origin grounded in the functional importance of both skills to social communication. Practically, results imply that, where parents wish to ensure their offspring develop the perceptual skills needed to recognise other-race people easily, childhood experience should be encouraged: just as an English-speaking person who moves to France as a child (but not an adult) can easily become a native speaker of French, we can easily become “native recognisers” of other-race faces via natural social exposure obtained in childhood, but not later.
  • Merkx, D., & Frank, S. L. (2019). Learning semantic sentence representations from visually grounded language without lexical knowledge. Natural Language Engineering, 25, 451-466. doi:10.1017/S1351324919000196.

    Abstract

    Current approaches to learning semantic representations of sentences often use prior word-level knowledge. The current study aims to leverage visual information in order to capture sentence level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep Neural Networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved and vice versa. We show that our model achieves results comparable to the current state of the art on two popular image-caption retrieval benchmark datasets: Microsoft Common Objects in Context (MSCOCO) and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using the data from the Semantic Textual Similarity (STS) benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics. Importantly, this result shows that we do not need prior knowledge of lexical level semantics in order to model sentence level semantics. These findings demonstrate the importance of visual information in semantics.
  • Meyer, A. S., & Hagoort, P. (2013). What does it mean to predict one's own utterances? [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 367-368. doi:10.1017/S0140525X12002786.

    Abstract

    Many authors have recently highlighted the importance of prediction for language comprehension. Pickering & Garrod (P&G) are the first to propose a central role for prediction in language production. This is an intriguing idea, but it is not clear what it means for speakers to predict their own utterances, and how prediction during production can be empirically distinguished from production proper.
  • Meyer, A. S. (1990). The time course of phonological encoding in language production: The encoding of successive syllables of a word. Journal of Memory and Language, 29, 524-545. doi:10.1016/0749-596X(90)90050-A.

    Abstract

    A series of experiments was carried out investigating the time course of phonological encoding in language production, i.e., the question of whether all parts of the phonological form of a word are created in parallel, or whether they are created in a specific order. A speech production task was used in which the subjects in each test trial had to say one out of three or five response words as quickly as possible. In one condition, information was provided about part of the forms of the words to be uttered; in another condition this was not the case. The production of disyllabic words was speeded by information about their first syllable, but not by information about their second syllable. Experiments using trisyllabic words showed that a facilitatory effect could be obtained from information about the second syllable of the words, provided that the first syllable was also known. These findings suggest that the syllables of a word must be encoded strictly sequentially, according to their order in the word.
  • Meyer, A. S., Roelofs, A., & Brehm, L. (2019). Thirty years of Speaking: An introduction to the special issue. Language, Cognition and Neuroscience, 34(9), 1073-1084. doi:10.1080/23273798.2019.1652763.

    Abstract

    Thirty years ago, Pim Levelt published Speaking. During the 10th International Workshop on Language Production held at the Max Planck Institute for Psycholinguistics in Nijmegen in July 2018, researchers reflected on the impact of the book in the field, developments since its publication, and current research trends. The contributions in this Special Issue are closely related to the presentations given at the workshop. In this editorial, we sketch the research agenda set by Speaking, review how different aspects of this agenda are taken up in the papers in this volume and outline directions for further research.
  • Miceli, S., Negwer, M., van Eijs, F., Kalkhoven, C., van Lierop, I., Homberg, J., & Schubert, D. (2013). High serotonin levels during brain development alter the structural input-output connectivity of neural networks in the rat somatosensory layer IV. Frontiers in Cellular Neuroscience, 7: 88. doi:10.3389/fncel.2013.00088.

    Abstract

    Homeostatic regulation of serotonin (5-HT) concentration is critical for “normal” topographical organization and development of thalamocortical (TC) afferent circuits. Down-regulation of the serotonin transporter (SERT) and the consequent impaired reuptake of 5-HT at the synapse, results in a reduced terminal branching of developing TC afferents within the primary somatosensory cortex (S1). Despite the presence of multiple genetic models, the effect of high extracellular 5-HT levels on the structure and function of developing intracortical neural networks is far from being understood. Here, using juvenile SERT knockout (SERT−/−) rats we investigated, in vitro, the effect of increased 5-HT levels on the structural organization of (i) the TC projections of the ventroposteromedial thalamic nucleus toward S1, (ii) the general barrel-field pattern, and (iii) the electrophysiological and morphological properties of the excitatory cell population in layer IV of S1 [spiny stellate (SpSt) and pyramidal cells]. Our results confirmed previous findings that high levels of 5-HT during development lead to a reduction of the topographical precision of TCA projections toward the barrel cortex. Also, the barrel pattern was altered but not abolished in SERT−/− rats. In layer IV, both excitatory SpSt and pyramidal cells showed a significantly reduced intracolumnar organization of their axonal projections. In addition, the layer IV SpSt cells gave rise to a prominent projection toward the infragranular layer Vb. Our findings point to a structural and functional reorganization of TCAs, as well as early stage intracortical microcircuitry, following the disruption of 5-HT reuptake during critical developmental periods. The increased projection pattern of the layer IV neurons suggests that the intracortical network changes are not limited to the main entry layer IV but may also affect the subsequent stages of the canonical circuits of the barrel cortex.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2019). Bridging the gap between second language acquisition research and memory science: The case of foreign language attrition. Frontiers in Human Neuroscience, 13: 397. doi:10.3389/fnhum.2019.00397.

    Abstract

    The field of second language acquisition (SLA) is by nature of its subject a highly interdisciplinary area of research. Learning a (foreign) language, for example, involves encoding new words, consolidating and committing them to long-term memory, and later retrieving them. All of these processes have direct parallels in the domain of human memory and have been thoroughly studied by researchers in that field. Yet, despite these clear links, the two fields have largely developed in parallel and in isolation from one another. The present paper aims to promote more cross-talk between SLA and memory science. We focus on foreign language (FL) attrition as an example of a research topic in SLA where the parallels with memory science are especially apparent. We discuss evidence that suggests that competition between languages is one of the mechanisms of FL attrition, paralleling the interference process thought to underlie forgetting in other domains of human memory. Backed up by concrete suggestions, we advocate the use of paradigms from the memory literature to study these interference effects in the language domain. In doing so, we hope to facilitate future cross-talk between the two fields, and to further our understanding of FL attrition as a memory phenomenon.
  • Middeldorp, C. M., Felix, J. F., Mahajan, A., EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, Early Growth Genetics (EGG) consortium, & McCarthy, M. I. (2019). The Early Growth Genetics (EGG) and EArly Genetics and Lifecourse Epidemiology (EAGLE) consortia: Design, results and future prospects. European Journal of Epidemiology, 34(3), 279-300. doi:10.1007/s10654-019-00502-9.

    Abstract

    The impact of many unfavorable childhood traits or diseases, such as low birth weight and mental disorders, is not limited to childhood and adolescence, as they are also associated with poor outcomes in adulthood, such as cardiovascular disease. Insight into the genetic etiology of childhood and adolescent traits and disorders may therefore provide new perspectives, not only on how to improve wellbeing during childhood, but also how to prevent later adverse outcomes. To achieve the sample sizes required for genetic research, the Early Growth Genetics (EGG) and EArly Genetics and Lifecourse Epidemiology (EAGLE) consortia were established. The majority of the participating cohorts are longitudinal population-based samples, but other cohorts with data on early childhood phenotypes are also involved. Cohorts often have a broad focus and collect(ed) data on various somatic and psychiatric traits as well as environmental factors. Genetic variants have been successfully identified for multiple traits, for example, birth weight, atopic dermatitis, childhood BMI, allergic sensitization, and pubertal growth. Furthermore, the results have shown that genetic factors also partly underlie the association with adult traits. As sample sizes are still increasing, it is expected that future analyses will identify additional variants. This, in combination with the development of innovative statistical methods, will provide detailed insight on the mechanisms underlying the transition from childhood to adult disorders. Both consortia welcome new collaborations. Policies and contact details are available from the corresponding authors of this manuscript and/or the consortium websites.
  • Minagawa-Kawai, Y., Cristia, A., Long, B., Vendelin, I., Hakuno, Y., Dutat, M., Filippin, L., Cabrol, D., & Dupoux, E. (2013). Insights on NIRS sensitivity from a cross-linguistic study on the emergence of phonological grammar. Frontiers in Psychology, 4: 170. doi:10.3389/fpsyg.2013.00170.

    Abstract

    Each language has a unique set of phonemic categories and phonotactic rules which determine permissible sound sequences in that language. Behavioral research demonstrates that one’s native language shapes the perception of both sound categories and sound sequences in adults, and neuroimaging results further indicate that the processing of native phonemes and phonotactics involves a left-dominant perisylvian brain network. Recent work using a novel technique, functional Near InfraRed Spectroscopy (NIRS), has suggested that a left-dominant network becomes evident toward the end of the first year of life as infants process phonemic contrasts. The present research project attempted to assess whether the same pattern would be seen for native phonotactics. We measured brain responses in Japanese- and French-learning infants to two contrasts: Abuna vs. Abna (a phonotactic contrast that is native in French, but not in Japanese) and Abuna vs. Abuuna (a vowel length contrast that is native in Japanese, but not in French). Results did not show a significant response to either contrast in either group, unlike both previous behavioral research on phonotactic processing and NIRS work on phonemic processing. To understand these null results, we performed similar NIRS experiments with Japanese adult participants. These data suggest that the infant null results arise from an interaction of multiple factors, involving the suitability of the experimental paradigm for NIRS measurements and stimulus perceptibility. We discuss the challenges facing this novel technique, particularly focusing on the optimal stimulus presentation which could yield strong enough hemodynamic responses when using the change detection paradigm.
  • Minutjukur, M., Tjitayi, K., Tjitayi, U., & Defina, R. (2019). Pitjantjatjara language change: Some observations and recommendations. Australian Aboriginal Studies, (1), 82-91.
  • Misersky, J., Majid, A., & Snijders, T. M. (2019). Grammatical gender in German influences how role-nouns are interpreted: Evidence from ERPs. Discourse Processes, 56(8), 643-654. doi:10.1080/0163853X.2018.1541382.

    Abstract

    Grammatically masculine role-nouns (e.g., Studenten-masc. ‘students’) can refer to men and women, but may favor an interpretation where only men are considered the referent. If true, this has implications for a society aiming to achieve equal representation in the workplace since, for example, job adverts use such role descriptions. To investigate the interpretation of role-nouns, the present ERP study assessed grammatical gender processing in German. Twenty participants read sentences where a role-noun (masculine or feminine) introduced a group of people, followed by a congruent (masculine–men, feminine–women) or incongruent (masculine–women, feminine–men) continuation. Both for feminine–men and masculine–women continuations a P600 (500 to 800 ms) was observed; another positivity was already present from 300 to 500 ms for feminine–men continuations, but critically not for masculine–women continuations. The results imply a male-biased rather than gender-neutral interpretation of the masculine—despite widespread usage of the masculine as a gender-neutral form—suggesting masculine forms are inadequate for representing genders equally.
  • Mitterer, H., Kim, S., & Cho, T. (2013). Compensation for complete assimilation in speech perception: The case of Korean labial-to-velar assimilation. Journal of Memory and Language, 69, 59-83. doi:10.1016/j.jml.2013.02.001.

    Abstract

    In connected speech, phonological assimilation to neighboring words can lead to pronunciation variants (e.g., 'garden bench' → 'gardem bench'). A large body of literature suggests that listeners use the phonetic context to reconstruct the intended word for assimilation types that often lead to incomplete assimilations (e.g., a pronunciation of 'garden' that carries cues for both a labial [m] and an alveolar [n]). In the current paper, we show that a similar context effect is observed for an assimilation that is often complete, Korean labial-to-velar place assimilation. In contrast to the context effects for partial assimilations, however, the context effects seem to rely completely on listeners' experience with the assimilation pattern in their native language.
  • Mitterer, H., & Russell, K. (2013). How phonological reductions sometimes help the listener. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 977-984. doi:10.1037/a0029196.

    Abstract

    In speech production, high-frequency words are more likely than low-frequency words to be phonologically reduced. We tested in an eye-tracking experiment whether listeners can make use of this correlation between lexical frequency and phonological realization of words. Participants heard prefixed verbs in which the prefix was either fully produced or reduced. Simultaneously, they saw a high-frequency verb and a low-frequency verb with this prefix-plus 2 distractors-on a computer screen. Participants were more likely to look at the high-frequency verb when they heard a reduced prefix than when they heard a fully produced prefix. Listeners hence exploit the correlation of lexical frequency and phonological reduction and assume that a reduced prefix is more likely to belong to a high-frequency word. This shows that reductions do not necessarily burden the listener but may in fact have a communicative function, in line with functional theories of phonology.
  • Mitterer, H., & Reinisch, E. (2013). No delays in application of perceptual learning in speech recognition: Evidence from eye tracking. Journal of Memory and Language, 69(4), 527-545. doi:10.1016/j.jml.2013.07.002.

    Abstract

    Three eye-tracking experiments tested at what processing stage lexically-guided retuning of a fricative contrast affects perception. One group of participants heard an ambiguous fricative between /s/ and /f/ replace /s/ in s-final words, the other group heard the same ambiguous fricative replacing /f/ in f-final words. In a test phase, both groups of participants heard a range of ambiguous fricatives at the end of Dutch minimal pairs (e.g., roos-roof, ‘rose’-‘robbery’). Participants who heard the ambiguous fricative replacing /f/ during exposure chose at test the f-final words more often than the other participants. During this test-phase, eye-tracking data showed that the effect of exposure exerted itself as soon as it could possibly have occurred, 200 ms after the onset of the fricative. This was at the same time as the onset of the effect of the fricative itself, showing that the perception of the fricative is changed by perceptual learning at an early level. Results converged in a time-window analysis and a Jackknife procedure testing the time at which effects reached a given proportion of their maxima. This indicates that perceptual learning affects early stages of speech processing, and supports the conclusion that perceptual learning is indeed perceptual rather than post-perceptual.

  • Mitterer, H., Scharenborg, O., & McQueen, J. M. (2013). Phonological abstraction without phonemes in speech perception. Cognition, 129, 356-361. doi:10.1016/j.cognition.2013.07.011.

    Abstract

    Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.
  • Mitterer, H., & Müsseler, J. M. (2013). Regional accent variation in the shadowing task: Evidence for a loose perception-action coupling in speech. Attention, Perception & Psychophysics, 75, 557-575. doi:10.3758/s13414-012-0407-8.

    Abstract

    We investigated the relation between action and perception in speech processing, using the shadowing task, in which participants repeat words they hear. In support of a tight perception–action link, previous work has shown that phonetic details in the stimulus influence the shadowing response. On the other hand, latencies do not seem to suffer if stimulus and response differ in their articulatory properties. The present investigation tested how perception influences production when participants are confronted with regional variation. Results showed that participants often imitate a regional variant if it occurs in the stimulus set but tend to stick to their own variant if the stimuli are consistent. Participants were forced or induced to correct by the experimental instructions. Articulatory stimulus–response differences do not lead to latency costs. These data indicate that speech perception does not necessarily recruit the production system.
  • Moisik, S. R. (2013). Harsh voice quality and its association with blackness in popular American media. Phonetica, 4, 193-215. doi:10.1159/000351059.

    Abstract

    Performers use various laryngeal settings to create voices for characters and personas they portray. Although some research demonstrates the sociophonetic associations of laryngeal voice quality, few studies have documented or examined the role of harsh voice quality, particularly with vibration of the epilaryngeal structures (growling). This article qualitatively examines phonetic properties of vocal performances in a corpus of popular American media and evaluates the association of voice qualities in these performances with representations of social identity and stereotype. In several cases, contrasting laryngeal states create sociophonetic contrast, and harsh voice quality is paired with the portrayal of racial stereotypes of black people. These cases indicate exaggerated emotional states and are associated with yelling/shouting modes of expression. Overall, however, the functioning of harsh voice quality as it occurs in the data is broader and may involve aggressive posturing, comedic inversion of aggressiveness, vocal pathology, and vocal homage.
  • Monaghan, P., & Fletcher, M. (2019). Do sound symbolism effects for written words relate to individual phonemes or to phoneme features? Language and Cognition, 11(2), 235-255. doi:10.1017/langcog.2019.20.

    Abstract

    The sound of words has been shown to relate to the meaning that the words denote, an effect that extends beyond morphological properties of the word. Studies of these sound-symbolic relations have described this iconicity in terms of individual phonemes, or alternatively due to acoustic properties (expressed in phonological features) relating to meaning. In this study, we investigated whether individual phonemes or phoneme features best accounted for iconicity effects. We tested 92 participants’ judgements about the appropriateness of 320 nonwords presented in written form, relating to 8 different semantic attributes. For all 8 attributes, individual phonemes fitted participants’ responses better than general phoneme features. These results challenge claims that sound-symbolic effects for visually presented words can access broad, cross-modal associations between sound and meaning; instead, the results indicate the operation of individual phoneme-to-meaning relations. Whether similar effects are found for nonwords presented auditorially remains an open question.
  • Monaghan, P., & Roberts, S. G. (2019). Cognitive influences in language evolution: Psycholinguistic predictors of loan word borrowing. Cognition, 186, 147-158. doi:10.1016/j.cognition.2019.02.007.

    Abstract

    Languages change due to social, cultural, and cognitive influences. In this paper, we provide an assessment of these cognitive influences on diachronic change in the vocabulary. Previously, tests of stability and change of vocabulary items have been conducted on small sets of words where diachronic change is imputed from cladistics studies. Here, we show for a substantially larger set of words that stability and change in terms of documented borrowings of words into English and into Dutch can be predicted by psycholinguistic properties of words that reflect their representational fidelity. We found that grammatical category, word length, age of acquisition, and frequency predict borrowing rates, but frequency has a non-linear relationship. Frequency correlates negatively with probability of borrowing for high-frequency words, but positively for low-frequency words. This borrowing evidence documents recent, observable diachronic change in the vocabulary enabling us to distinguish between change associated with transmission during language acquisition and change due to innovations by proficient speakers.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. Neuroimage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing however may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Morgan, T. J. H., Acerbi, A., & Van Leeuwen, E. J. C. (2019). Copy-the-majority of instances or individuals? Two approaches to the majority and their consequences for conformist decision-making. PLoS One, 14(1): e0210748. doi:10.1371/journal.pone.0210748.

    Abstract

    Cultural evolution is the product of the psychological mechanisms that underlie individual decision making. One commonly studied learning mechanism is a disproportionate preference for majority opinions, known as conformist transmission. While most theoretical and experimental work approaches the majority in terms of the number of individuals that perform a behaviour or hold a belief, some recent experimental studies approach the majority in terms of the number of instances a behaviour is performed. Here, we use a mathematical model to show that disagreement between these two notions of the majority can arise when behavioural variants are performed at different rates, with different salience or in different contexts (variant overrepresentation) and when a subset of the population act as demonstrators to the whole population (model biases). We also show that because conformist transmission changes the distribution of behaviours in a population, how observers approach the majority can cause populations to diverge, and that this can happen even when the two approaches to the majority agree with regards to which behaviour is in the majority. We discuss these results in light of existing findings, ranging from political extremism on Twitter to studies of animal foraging behaviour. We conclude that the factors we considered (variant overrepresentation and model biases) are plausibly widespread. As such, it is important to understand how individuals approach the majority in order to understand the effects of majority influence in cultural evolution.
  • Mulder, K., Schreuder, R., & Dijkstra, T. (2013). Morphological family size effects in L1 and L2 processing: An electrophysiological study. Language and Cognitive Processes, 27, 1004-1035. doi:10.1080/01690965.2012.733013.

    Abstract

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and Dutch (L1) lexical decision tasks; in two other experiments, an L1 and L2 go/no-go lexical decision task were performed while Event-Related Potentials (ERPs) were recorded. Two questions were addressed. First, is the ERP signal sensitive to the morphological productivity of words? Second, does nontarget language activation in L2 processing spread beyond the item itself, to the morphological family of the activated nontarget word? The two behavioural experiments both showed a facilitatory effect of Dutch Family Size, indicating that the morphological family in the L1 is activated regardless of language context. In the two ERP experiments, Family Size effects were found to modulate the N400 component. Less negative waveforms were observed for words with a high L1 Family Size compared to words with a low L1 Family Size in the N400 time window, in both the L1 and L2 task. In addition, these Family Size effects persisted in later time windows. The data are discussed in light of the Morphological Family Resonance Model (MFRM) of morphological processing and the BIA+ model.
  • Nakamoto, T., Suei, Y., Konishi, M., Kanda, T., Verdonschot, R. G., & Kakimoto, N. (2019). Abnormal positioning of the common carotid artery clinically diagnosed as a submandibular mass. Oral Radiology, 35(3), 331-334. doi:10.1007/s11282-018-0355-7.

    Abstract

    The common carotid artery (CCA) usually runs along the long axis of the neck, although it is occasionally found in an abnormal position or is displaced. We report a case of an 86-year-old woman in whom the CCA was identified in the submandibular area. The patient visited our clinic and reported soft tissue swelling in the right submandibular area. It resembled a tumor mass or a swollen lymph node. Computed tomography showed that it was the right CCA that had been bent forward and was running along the submandibular subcutaneous area. Ultrasonography verified the diagnosis. No other lesions were found on the diagnostic images. Consequently, the patient was diagnosed as having abnormal CCA positioning. Although this condition generally requires no treatment, it is important to follow up the abnormality with diagnostic imaging because of the risk of cerebrovascular disorders.
  • Nakamoto, T., Taguchi, A., Verdonschot, R. G., & Kakimoto, N. (2019). Improvement of region of interest extraction and scanning method of computer-aided diagnosis system for osteoporosis using panoramic radiographs. Oral Radiology, 35(2), 143-151. doi:10.1007/s11282-018-0330-3.

    Abstract

    Objectives: Patients undergoing osteoporosis treatment benefit greatly from early detection. We previously developed a computer-aided diagnosis (CAD) system to identify osteoporosis using panoramic radiographs. However, the region of interest (ROI) was relatively small, and the method to select suitable ROIs was labor-intensive. This study aimed to expand the ROI and perform semi-automatized extraction of ROIs. The diagnostic performance and operating time were also assessed. Methods: We used panoramic radiographs and skeletal bone mineral density data of 200 postmenopausal women. Using the reference point that we defined by averaging 100 panoramic images as the lower mandibular border under the mental foramen, a 400x100-pixel ROI was automatically extracted and divided into four 100x100-pixel blocks. Valid blocks were analyzed using program 1, which examined each block separately, and program 2, which divided the blocks into smaller segments and performed scans/analyses across blocks. Diagnostic performance was evaluated using another set of 100 panoramic images. Results: Most ROIs (97.0%) were correctly extracted. The operation time decreased to 51.4% for program 1 and to 69.3% for program 2. The sensitivity, specificity, and accuracy for identifying osteoporosis were 84.0, 68.0, and 72.0% for program 1 and 92.0, 62.7, and 70.0% for program 2, respectively. Compared with the previous conventional system, program 2 recorded a slightly higher sensitivity, although it occasionally also elicited false positives. Conclusions: Patients at risk for osteoporosis can be identified more rapidly using this new CAD system, which may contribute to earlier detection and intervention and improved medical care.
  • Nayernia, L., Van den Vijver, R., & Indefrey, P. (2019). The influence of orthography on phonemic knowledge: An experimental investigation on German and Persian. Journal of Psycholinguistic Research, 48(6), 1391-1406. doi:10.1007/s10936-019-09664-9.

    Abstract

    This study investigated whether the phonological representation of a word is modulated by its orthographic representation in case of a mismatch between the two representations. Such a mismatch is found in Persian, where short vowels are represented phonemically but not orthographically. Persian adult literates, Persian adult illiterates, and German adult literates were presented with two auditory tasks, an AX-discrimination task and a reversal task. We assumed that if orthographic representations influence phonological representations, Persian literates should perform worse than Persian illiterates or German literates on items with short vowels in these tasks. The results of the discrimination tasks showed that Persian literates and illiterates as well as German literates were approximately equally competent in discriminating short vowels in Persian words and pseudowords. Persian literates did not discriminate well between German words containing phonemes that differed only in vowel length. German literates performed relatively poorly in discriminating German homographic words that differed only in vowel length. Persian illiterates were unable to perform the reversal task in Persian. The results of the other two participant groups in the reversal task showed the predicted poorer performance of Persian literates on Persian items containing short vowels compared to items containing long vowels only. German literates did not show this effect in German. Our results suggest two distinct effects of orthography on phonemic representations: whereas the lack of orthographic representations seems to affect phonemic awareness, homography seems to affect the discriminability of phonemic representations.
  • Nazzi, T., & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5, 25-47. doi:10.1146/annurev-linguistics-011718-011919.

    Abstract

    All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association.
  • Nettle, D., Cronin, K. A., & Bateson, M. (2013). Responses of chimpanzees to cues of conspecific observation. Animal Behaviour, 86(3), 595-602. doi:10.1016/j.anbehav.2013.06.015.

    Abstract

    Recent evidence has shown that humans are remarkably sensitive to artificial cues of conspecific observation when making decisions with potential social consequences. Whether similar effects are found in other great apes has not yet been investigated. We carried out two experiments in which individual chimpanzees, Pan troglodytes, took items of food from an array in the presence of either an image of a large conspecific face or a scrambled control image. In experiment 1 we compared three versions of the face image varying in size and the amount of the face displayed. In experiment 2 we compared a fourth variant of the image with more prominent coloured eyes displayed closer to the focal chimpanzee. The chimpanzees did not look at the face images significantly more than at the control images in either experiment. Although there were trends for some individuals in each experiment to be slower to take high-value food items in the face conditions, these were not consistent or robust. We suggest that the extreme human sensitivity to cues of potential conspecific observation may not be shared with chimpanzees.
  • Newbury, D. F., Mari, F., Akha, E. S., MacDermot, K. D., Canitano, R., Monaco, A. P., Taylor, J. C., Renieri, A., Fisher, S. E., & Knight, S. J. L. (2013). Dual copy number variants involving 16p11 and 6q22 in a case of childhood apraxia of speech and pervasive developmental disorder. European Journal of Human Genetics, 21, 361-365. doi:10.1038/ejhg.2012.166.

    Abstract

    In this issue, Raca et al. present two cases of childhood apraxia of speech (CAS) arising from microdeletions of chromosome 16p11.2. They propose that comprehensive phenotypic profiling may assist in the delineation and classification of such cases. To complement this study, we would like to report on a third, unrelated, child who presents with CAS and a chromosome 16p11.2 heterozygous deletion. We use genetic data from this child and his family to illustrate how comprehensive genetic profiling may also assist in the characterisation of 16p11.2 microdeletion syndrome.
  • Niermann, H. C. M., Tyborowska, A., Cillessen, A. H. N., Van Donkelaar, M. M. J., Lammertink, F., Gunnar, M. R., Franke, B., Figner, B., & Roelofs, K. (2019). The relation between infant freezing and the development of internalizing symptoms in adolescence: A prospective longitudinal study. Developmental Science, 22(3): e12763. doi:10.1111/desc.12763.

    Abstract

    Given the long-lasting detrimental effects of internalizing symptoms, there is great need for detecting early risk markers. One promising marker is freezing behavior. Whereas initial freezing reactions are essential for coping with threat, prolonged freezing has been associated with internalizing psychopathology. However, it remains unknown whether early life alterations in freezing reactions predict changes in internalizing symptoms during adolescent development. In a longitudinal study (N = 116), we tested prospectively whether observed freezing in infancy predicted the development of internalizing symptoms from childhood through late adolescence (until age 17). Both longer and absent infant freezing behavior during a standard challenge (robot-confrontation task) were associated with internalizing symptoms in adolescence. Specifically, absent infant freezing predicted a relative increase in internalizing symptoms consistently across development from relatively low symptom levels in childhood to relatively high levels in late adolescence. Longer infant freezing also predicted a relative increase in internalizing symptoms, but only up until early adolescence. This latter effect was moderated by peer stress and was followed by a later decrease in internalizing symptoms. The findings suggest that early deviations in defensive freezing responses signal risk for internalizing symptoms and may constitute important markers in future stress vulnerability and resilience studies.
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
  • Nieuwland, M. S., Coopmans, C. W., & Sommers, R. P. (2019). Distinguishing old from new referents during discourse comprehension: Evidence from ERPs and oscillations. Frontiers in Human Neuroscience, 13: 398. doi:10.3389/fnhum.2019.00398.

    Abstract

    In this EEG study, we used pre-registered and exploratory ERP and time-frequency analyses to investigate the resolution of anaphoric and non-anaphoric noun phrases during discourse comprehension. Participants listened to story contexts that described two antecedents, and subsequently read a target sentence with a critical noun phrase that lexically matched one antecedent (‘old’), matched two antecedents (‘ambiguous’), partially matched one antecedent in terms of semantic features (‘partial-match’), or introduced another referent (non-anaphoric, ‘new’). After each target sentence, participants judged whether the noun referred back to an antecedent (i.e., an ‘old/new’ judgment), which was easiest for ambiguous nouns and hardest for partially matching nouns. The noun-elicited N400 ERP component demonstrated initial sensitivity to repetition and semantic overlap, corresponding to repetition and semantic priming effects, respectively. New and partially matching nouns both elicited a subsequent frontal positivity, which suggested that partially matching anaphors may have been processed as new nouns temporarily. ERPs in an even later time window and ERPs time-locked to sentence-final words suggested that new and partially matching nouns had different effects on comprehension, with partially matching nouns incurring additional processing costs up to the end of the sentence. In contrast to the ERP results, the time-frequency results primarily demonstrated sensitivity to noun repetition, and did not differentiate partially matching anaphors from new nouns. In sum, our results show the ERP and time-frequency effects of referent repetition during discourse comprehension, and demonstrate the potentially demanding nature of establishing the anaphoric meaning of a novel noun.
  • Nieuwland, M. S. (2019). Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neuroscience and Biobehavioral Reviews, 96, 367-400. doi:10.1016/j.neubiorev.2018.11.019.

    Abstract

    Current theories of language comprehension posit that readers and listeners routinely try to predict the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
  • Nieuwland, M. S. (2013). “If a lion could speak …”: Online sensitivity to propositional truth-value of unrealistic counterfactual sentences. Journal of Memory and Language, 68(1), 54-67. doi:10.1016/j.jml.2012.08.003.

    Abstract

    People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual combinations and infer their consequences as the sentence unfolds in time (e.g., “If dogs had gills…”). Experiment 1 established that without this premise, described consequences (e.g., “Dobermans would breathe under water …”) elicited larger N400 responses than real-world true sentences. Incorporation of the counterfactual premise in Experiment 2 generated similar N400 effects of propositional truth-value in counterfactual and real-world sentences, suggesting that the counterfactual context eliminated the interpretive problems posed by locally anomalous sentences. This result did not depend on cloze probability of the sentences. In contrast to earlier findings regarding online comprehension of logical operators and counterfactuals, these results show that ongoing processing can be directly impacted by propositional truth-value, even that of unrealistic counterfactuals.
  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2013). Event-related brain potential evidence for animacy processing asymmetries during sentence comprehension. Brain and Language, 126(2), 151-158. doi:10.1016/j.bandl.2013.04.005.

    Abstract

    The animacy distinction is deeply rooted in the language faculty. A key example is differential object marking, the phenomenon where animate sentential objects receive specific marking. We used event-related potentials to examine the neural processing consequences of case-marking violations on animate and inanimate direct objects in Spanish. Inanimate objects with incorrect prepositional case marker ‘a’ (‘al suelo’) elicited a P600 effect compared to unmarked objects, consistent with previous literature. However, animate objects without the required prepositional case marker (‘el obispo’) only elicited an N400 effect compared to marked objects. This novel finding, an exclusive N400 modulation by a straightforward grammatical rule violation, does not follow from extant neurocognitive models of sentence processing, and mirrors unexpected “semantic P600” effects for thematically problematic sentences. These results may reflect animacy asymmetry in competition for argument prominence: following the article, thematic interpretation difficulties are elicited only by unexpectedly animate objects.
  • Nievergelt, C. M., Maihofer, A. X., Klengel, T., Atkinson, E. G., Chen, C.-Y., Choi, K. W., Coleman, J. R. I., Dalvie, S., Duncan, L. E., Gelernter, J., Levey, D. F., Logue, M. W., Polimanti, R., Provost, A. C., Ratanatharathorn, A., Stein, M. B., Torres, K., Aiello, A. E., Almli, L. M., Amstadter, A. B., Andersen, S. B., Andreassen, O. A., Arbisi, P. A., Ashley-Koch, A. E., Austin, S. B., Avdibegovic, E., Babić, D., Bækvad-Hansen, M., Baker, D. G., Beckham, J. C., Bierut, L. J., Bisson, J. I., Boks, M. P., Bolger, E. A., Børglum, A. D., Bradley, B., Brashear, M., Breen, G., Bryant, R. A., Bustamante, A. C., Bybjerg-Grauholm, J., Calabrese, J. R., Caldas-de-Almeida, J. M., Dale, A. M., Daly, M. J., Daskalakis, N. P., Deckert, J., Delahanty, D. L., Dennis, M. F., Disner, S. G., Domschke, K., Dzubur-Kulenovic, A., Erbes, C. R., Evans, A., Farrer, L. A., Feeny, N. C., Flory, J. D., Forbes, D., Franz, C. E., Galea, S., Garrett, M. E., Gelaye, B., Geuze, E., Gillespie, C., Uka, A. G., Gordon, S. D., Guffanti, G., Hammamieh, R., Harnal, S., Hauser, M. A., Heath, A. C., Hemmings, S. M. J., Hougaard, D. M., Jakovljevic, M., Jett, M., Johnson, E. O., Jones, I., Jovanovic, T., Qin, X.-J., Junglen, A. G., Karstoft, K.-I., Kaufman, M. L., Kessler, R. C., Khan, A., Kimbrel, N. A., King, A. P., Koen, N., Kranzler, H. R., Kremen, W. S., Lawford, B. R., Lebois, L. A. M., Lewis, C. E., Linnstaedt, S. D., Lori, A., Lugonja, B., Luykx, J. J., Lyons, M. J., Maples-Keller, J., Marmar, C., Martin, A. R., Martin, N. G., Maurer, D., Mavissakalian, M. R., McFarlane, A., McGlinchey, R. E., McLaughlin, K. A., McLean, S. A., McLeay, S., Mehta, D., Milberg, W. P., Miller, M. W., Morey, R. A., Morris, C. P., Mors, O., Mortensen, P. B., Neale, B. M., Nelson, E. C., Nordentoft, M., Norman, S. B., O’Donnell, M., Orcutt, H. K., Panizzon, M. S., Peters, E. S., Peterson, A. L., Peverill, M., Pietrzak, R. H., Polusny, M. A., Rice, J. P., Ripke, S., Risbrough, V. B., Roberts, A. L., Rothbaum, A. O., Rothbaum, B. O., Roy-Byrne, P., Ruggiero, K., Rung, A., Rutten, B. P. F., Saccone, N. L., Sanchez, S. E., Schijven, D., Seedat, S., Seligowski, A. V., Seng, J. S., Sheerin, C. M., Silove, D., Smith, A. K., Smoller, J. W., Sponheim, S. R., Stein, D. J., Stevens, J. S., Sumner, J. A., Teicher, M. H., Thompson, W. K., Trapido, E., Uddin, M., Ursano, R. J., van den Heuvel, L. L., Van Hooff, M., Vermetten, E., Vinkers, C. H., Voisey, J., Wang, Y., Wang, Z., Werge, T., Williams, M. A., Williamson, D. E., Winternitz, S., Wolf, C., Wolf, E. J., Wolff, J. D., Yehuda, R., Young, R. M., Young, K. A., Zhao, H., Zoellner, L. A., Liberzon, I., Ressler, K. J., Haas, M., & Koenen, K. C. (2019). International meta-analysis of PTSD genome-wide association studies identifies sex- and ancestry-specific genetic risk loci. Nature Communications, 10(1): 4558. doi:10.1038/s41467-019-12576-w.

    Abstract

    The risk of posttraumatic stress disorder (PTSD) following trauma is heritable, but robust common variants have yet to be identified. In a multi-ethnic cohort including over 30,000 PTSD cases and 170,000 controls we conduct a genome-wide association study of PTSD. We demonstrate SNP-based heritability estimates of 5–20%, varying by sex. Three genome-wide significant loci are identified, 2 in European and 1 in African-ancestry analyses. Analyses stratified by sex implicate 3 additional loci in men. Along with other novel genes and non-coding RNAs, a Parkinson’s disease gene involved in dopamine regulation, PARK2, is associated with PTSD. Finally, we demonstrate that polygenic risk for PTSD is significantly predictive of re-experiencing symptoms in the Million Veteran Program dataset, although specific loci did not replicate. These results demonstrate the role of genetic variation in the biology of risk for PTSD and highlight the necessity of conducting sex-stratified analyses and expanding GWAS beyond European ancestry populations.

    Additional information

    Supplementary information
  • Noble, C., Sala, G., Peter, M., Lingwood, J., Rowland, C. F., Gobet, F., & Pine, J. (2019). The impact of shared book reading on children's language skills: A meta-analysis. Educational Research Review, 28: 100290. doi:10.1016/j.edurev.2019.100290.

    Abstract

    Shared book reading is thought to have a positive impact on young children's language development, with shared reading interventions often run in an attempt to boost children's language skills. However, despite the volume of research in this area, a number of issues remain outstanding. The current meta-analysis explored whether shared reading interventions are equally effective (a) across a range of study designs; (b) across a range of different outcome variables; and (c) for children from different SES groups. It also explored the potentially moderating effects of intervention duration, child age, use of dialogic reading techniques, person delivering the intervention and mode of intervention delivery.

    Our results show that, while there is an effect of shared reading on language development, this effect is smaller than reported in previous meta-analyses (g = 0.194, p = .002). They also show that this effect is moderated by the type of control group used and is negligible in studies with active control groups (g = 0.028, p = .703). Finally, they show no significant effects of differences in outcome variable (ps ≥ .286), socio-economic status (p = .658), or any of our other potential moderators (ps ≥ .077), and non-significant effects for studies with follow-ups (g = 0.139, p = .200). On the basis of these results, we make a number of recommendations for researchers and educators about the design and implementation of future shared reading interventions.

    Additional information

    Supplementary data
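    The pooled effect sizes quoted in this abstract (e.g., g = 0.194) are the kind of quantity a random-effects meta-analysis produces by combining weighted study-level effects. As a hedged illustration, and not the authors' actual analysis pipeline, a minimal DerSimonian-Laird pooling of made-up study effects could be sketched in Python as follows:

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level effect sizes."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)           # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical study-level effect sizes (g) and their sampling variances
g = [0.35, 0.10, 0.05, 0.28, 0.15]
v = [0.02, 0.01, 0.03, 0.04, 0.02]
pooled, se, tau2 = random_effects_pool(g, v)
print(f"pooled g = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")
```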
  • Nomi, J. S., Frances, C., Nguyen, M. T., Bastidas, S., & Troup, L. J. (2013). Interaction of threat expressions and eye gaze: an event-related potential study. NeuroReport, 24, 813-817. doi:10.1097/WNR.0b013e3283647682.

    Abstract

    The current study examined the interaction of fearful, angry, happy, and neutral expressions with left, straight, and right eye gaze directions. Human participants viewed faces consisting of various expression and eye gaze combinations while event-related potential (ERP) data were collected. The results showed that angry expressions modulated the mean amplitude of the P1, whereas fearful and happy expressions modulated the mean amplitude of the N170. No influence of eye gaze on mean amplitudes for the P1 and N170 emerged. Fearful, angry, and happy expressions began to interact with eye gaze to influence mean amplitudes in the time window of 200–400 ms. The results suggest that early processing of expression influences ERPs independently of eye gaze, whereas expression and gaze interact to influence later ERPs.
  • Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.

    Abstract

    There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
  • O’Meara, C., Kung, S. S., & Majid, A. (2019). The challenge of olfactory ideophones: Reconsidering ineffability from the Totonac-Tepehua perspective. International Journal of American Linguistics, 85(2), 173-212. doi:10.1086/701801.

    Abstract

    Olfactory impressions are said to be ineffable, but little systematic exploration has been done to substantiate this. We explored olfactory language in Huehuetla Tepehua—a Totonac-Tepehua language spoken in Hidalgo, Mexico—which has a large inventory of ideophones, words with sound-symbolic properties used to describe perceptuomotor experiences. A multi-method study found Huehuetla Tepehua has 45 olfactory ideophones, illustrating intriguing sound-symbolic alternation patterns. Elaboration in the olfactory domain is not unique to this language; related Totonac-Tepehua languages also have impressive smell lexicons. Comparison across these languages shows olfactory and gustatory terms overlap in interesting ways, mirroring the physiology of smelling and tasting. However, although cognate taste terms are formally similar, olfactory terms are less so. We suggest that the relative instability of smell vocabulary in comparison with that of taste likely results from the more varied olfactory experiences caused by the mutability of smells in different environments.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.

    Abstract

    The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as a means of expression. Despite their striking differences, they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to these gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge in acquiring new schemas. Through these mechanisms, we propose that iconic gestures that overlap in form with signs may serve as a type of ‘manual cognate’ that helps non-signing adults to break into a new language at first exposure.

    Additional information

    Supplementary Materials
  • Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.

    Abstract

    Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect but crucially visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3) however the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) only had a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.

    Abstract

    Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.
  • Otake, T., & Cutler, A. (2013). Lexical selection in action: Evidence from spontaneous punning. Language and Speech, 56(4), 555-573. doi:10.1177/0023830913478933.

    Abstract

    Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures.
  • Ozturk, O., Shayan, S., Liszkowski, U., & Majid, A. (2013). Language is not necessary for color categories. Developmental Science, 16, 111-115. doi:10.1111/desc.12008.

    Abstract

    The origin of color categories is under debate. Some researchers argue that color categories are linguistically constructed, while others claim they have a pre-linguistic, and possibly even innate, basis. Although there is some evidence that 4–6-month-old infants respond categorically to color, these empirical results have been challenged in recent years. First, it has been claimed that previous demonstrations of color categories in infants may reflect color preferences instead. Second, and more seriously, other labs have reported failing to replicate the basic findings at all. In the current study we used eye-tracking to test 8-month-old infants’ categorical perception of a previously attested color boundary (green–blue) and an additional color boundary (blue–purple). Our results show that infants are faster and more accurate at fixating targets when they come from a different color category than when from the same category (even though the chromatic separation sizes were equated). This is the case for both blue–green and blue–purple. Our findings provide independent evidence for the existence of color categories in pre-linguistic infants, and suggest that categorical perception of color can occur without color language.
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g. speech) in isolation.
  • Perlman, M., & Gibbs, R. W. (2013). Pantomimic gestures reveal the sensorimotor imagery of a human-fostered gorilla. Journal of Mental Imagery, 37(3/4), 73-96.

    Abstract

    This article describes the use of pantomimic gestures by the human-fostered gorilla, Koko, as evidence of her sensorimotor imagery. We present five video-recorded instances of Koko's spontaneously created pantomimes during her interactions with human caregivers. The precise movements and context of each gesture are described in detail to examine how it functions to communicate Koko's requests for various objects and actions to be performed. The analysis assesses the active "iconicity" of each targeted gesture and examines the underlying elements of sensorimotor imagery that are incorporated by the gesture. We suggest that Koko's pantomimes reflect an imaginative understanding of different actions, objects, and events that is similar in important respects to humans' embodied imagery capabilities.
  • Peter, M. S., & Rowland, C. F. (2019). Aligning developmental and processing accounts of implicit and statistical learning. Topics in Cognitive Science, 11, 555-572. doi:10.1111/tops.12396.

    Abstract

    A long‐standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies used by the statistical learning literature nor the artificial grammar learning tasks studies from the implicit learning literature can be used to fully explain how children's syntax becomes adult‐like. In this work, we consider an alternative explanation—that children use error‐based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual‐path model, which incorporates properties from both statistical and implicit learning, and offers an explanation for syntax learning and structural priming using a common error‐based learning mechanism. We then turn our attention to future directions for the field, here suggesting how structural priming might inform the statistical learning and implicit learning literature on the nature of the learning mechanism.
  • Peter, M. S., Durrant, S., Jessop, A., Bidgood, A., Pine, J. M., & Rowland, C. F. (2019). Does speed of processing or vocabulary size predict later language growth in toddlers? Cognitive Psychology, 115: 101238. doi:10.1016/j.cogpsych.2019.101238.

    Abstract

    It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UKCDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size - though this relationship changed over time, and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth.
  • Petras, K., Ten Oever, S., Jacobs, C., & Goffaux, V. (2019). Coarse-to-fine information integration in human vision. NeuroImage, 186, 103-112. doi:10.1016/j.neuroimage.2018.10.086.

    Abstract

    Coarse-to-fine theories of vision propose that the coarse information carried by the low spatial frequencies (LSF) of visual input guides the integration of finer, high spatial frequency (HSF) detail. Whether and how LSF modulates HSF processing in naturalistic broad-band stimuli is still unclear. Here we used multivariate decoding of EEG signals to separate the respective contribution of LSF and HSF to the neural response evoked by broad-band images. Participants viewed images of human faces, monkey faces and phase-scrambled versions that were either broad-band or filtered to contain LSF or HSF. We trained classifiers on EEG scalp-patterns evoked by filtered scrambled stimuli and evaluated the derived models on broad-band scrambled and intact trials. We found reduced HSF contribution when LSF was informative towards image content, indicating that coarse information does guide the processing of fine detail, in line with coarse-to-fine theories. We discuss the potential cortical mechanisms underlying such coarse-to-fine feedback.

    Additional information

    Supplementary figures
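    The cross-decoding step described above (training classifiers on EEG patterns evoked by filtered stimuli and evaluating them on broad-band trials) can be sketched compactly. The snippet below is an illustrative assumption rather than the authors' exact pipeline: it uses randomly generated arrays in place of real scalp patterns, a linear discriminant classifier, and arbitrary trial counts and feature dimensions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: trials x (channels * time points); labels 0 = LSF, 1 = HSF.
# Random data here, so accuracy will hover around chance (0.5).
X_filtered = rng.normal(size=(200, 64 * 50))   # training set: filtered-stimulus trials
y_filtered = rng.integers(0, 2, size=200)
X_broadband = rng.normal(size=(100, 64 * 50))  # test set: broad-band trials
y_broadband = rng.integers(0, 2, size=100)

# Train on filtered trials, evaluate on broad-band trials (cross-decoding).
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
clf.fit(X_filtered, y_filtered)
print("cross-decoding accuracy:", clf.score(X_broadband, y_broadband))
```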
  • Petzell, M., & Hammarström, H. (2013). Grammatical and lexical subclassification of the Morogoro region, Tanzania. Nordic Journal of African Studies, 22(3), 129-157.

    Abstract

    This article discusses lexical and grammatical comparison and sub-grouping in a set of closely related Bantu language varieties in the Morogoro region, Tanzania. The Greater Ruvu Bantu language varieties include Kagulu [G12], Zigua [G31], Kwere [G32], Zalamo [G33], Nguu [G34], Luguru [G35], Kami [G36] and Kutu [G37]. The comparison is based on 27 morphophonological and morphosyntactic parameters, supplemented by a lexicon of 500 items. In order to determine the relationships and boundaries between the varieties, grammatical phenomena constitute a valuable complement to counting the number of identical words or cognates. We have used automated cognate judgment methods, as well as manual cognate judgments based on older sources, in order to compare lexical data. Finally, we have included speaker attitudes (i.e. self-assessment of linguistic similarity) in an attempt to map whether the languages that are perceived by speakers as being linguistically similar really are closely related.
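    Automated cognate judgments over a wordlist, as mentioned in this abstract, are commonly based on string similarity between the forms collected for the same concept. The sketch below is a generic illustration under that assumption (the authors' specific method is not described here), using a normalized Levenshtein distance and hypothetical word forms:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    """0.0 for identical forms, 1.0 for maximally different forms."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

# Hypothetical wordlist entries from two varieties for the same concepts
pairs = [("muntu", "munhu"), ("imbwa", "mbwa"), ("maji", "mazi")]
for w1, w2 in pairs:
    print(w1, w2, round(normalized_distance(w1, w2), 2))
```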
  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
  • Poort, E. D., & Rodd, J. M. (2019). A database of Dutch–English cognates, interlingual homographs and translation equivalents. Journal of Cognition, 2(1): 15. doi:10.5334/joc.67.

    Abstract

    To investigate the structure of the bilingual mental lexicon, researchers in the field of bilingualism often use words that exist in multiple languages: cognates (which have the same meaning) and interlingual homographs (which have a different meaning). A high proportion of these studies have investigated language processing in Dutch–English bilinguals. Despite the abundance of research using such materials, few studies exist that have validated such materials. We conducted two rating experiments in which Dutch–English bilinguals rated the meaning, spelling and pronunciation similarity of pairs of Dutch and English words. On the basis of these results, we present a new database of Dutch–English identical cognates (e.g. “wolf”–“wolf”; n = 58), non-identical cognates (e.g. “kat”–“cat”; n = 74), interlingual homographs (e.g. “angel”–“angel”; n = 72) and translation equivalents (e.g. “wortel”–“carrot”; n = 78). The database can be accessed at http://osf.io/tcdxb/.

    Additional information

    database
  • Poort, E. D., & Rodd, J. M. (2019). Towards a distributed connectionist account of cognates and interlingual homographs: Evidence from semantic relatedness tasks. PeerJ, 7: e6725. doi:10.7717/peerj.6725.

    Abstract

    Background

    Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult—if not impossible—to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments.
    Methods

    In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes prior to completing the English semantic relatedness task.
    Results

    In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms only for the bilinguals, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had an opposite effect on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task.
    Conclusion

    After comparing our results to studies in both the bilingual and monolingual domain, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case.
  • Postema, M., De Marco, M., Colato, E., & Venneri, A. (2019). A study of within-subject reliability of the brain’s default-mode network. Magnetic Resonance Materials in Physics, Biology and Medicine, 32(3), 391-405. doi:10.1007/s10334-018-00732-0.

    Abstract

    Objective

    Resting-state functional magnetic resonance imaging (fMRI) is a promising technique for the study of Alzheimer’s disease (AD). This study aimed to examine short-term reliability of the default-mode network (DMN), one of the main haemodynamic patterns of the brain.
    Materials and methods

    Using a 1.5 T Philips Achieva scanner, two consecutive resting-state fMRI runs were acquired on 69 healthy adults, 62 patients with mild cognitive impairment (MCI) due to AD, and 28 patients with AD dementia. The anterior and posterior DMN and, as control, the visual-processing network (VPN) were computed using two different methodologies: connectivity of predetermined seeds (theory-driven) and dual regression (data-driven). Divergence and convergence in network strength and topography were calculated with paired t tests, global correlation coefficients, voxel-based correlation maps, and indices of reliability.
    Results

    No topographical differences were found in any of the networks. High correlations and reliability were found in the posterior DMN of healthy adults and MCI patients. Lower reliability was found in the anterior DMN and in the VPN, and in the posterior DMN of dementia patients.
    Discussion

    Strength and topography of the posterior DMN appear relatively stable and reliable over a short-term period of acquisition but with some degree of variability across clinical samples.
  • Postema, M., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Busatto Filho, G., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Di Martino, A., Dinstein, I., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Feng, X., Fitzgerald, J., Floris, D. L., Freitag, C. M., Gallagher, L., Glahn, D. C., Gori, I., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Kong, X., Lazaro, L., Lerch, J. P., Luna, B., Martinho, M. M., McGrath, J., Medland, S. E., Muratori, F., Murphy, C. M., Murphy, D. G. M., O'Hearn, K., Oranje, B., Parellada, M., Puig, O., Retico, A., Rosa, P., Rubia, K., Shook, D., Taylor, M., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2019). Altered structural brain asymmetry in autism spectrum disorder in a study of 54 datasets. Nature Communications, 10: 4958. doi:10.1038/s41467-019-13005-8.
  • St Pourcain, B., Whitehouse, A. J. O., Ang, W. Q., Warrington, N. M., Glessner, J. T., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., Hakonarson, H., Pennell, C. E., & Smith, G. (2013). Common variation contributes to the genetic architecture of social communication traits. Molecular Autism, 4: 34. doi:10.1186/2040-2392-4-34.

    Abstract

    Background: Social communication difficulties represent an autistic trait that is highly heritable and persistent during the course of development. However, little is known about the underlying genetic architecture of this phenotype. Methods: We performed a genome-wide association study on parent-reported social communication problems using items of the children’s communication checklist (age 10 to 11 years), studying single and/or joint marker effects. Analyses were conducted in a large UK population-based birth cohort (Avon Longitudinal Study of Parents and their Children, ALSPAC, N = 5,584) and followed up within a sample of children with comparable measures from Western Australia (RAINE, N = 1,364). Results: Two of our seven independent top signals (P-discovery < 1.0E-05) were replicated (0.009 < P-replication ≤ 0.02) within RAINE and suggested evidence for association at 6p22.1 (rs9257616, meta-P = 2.5E-07) and 14q22.1 (rs2352908, meta-P = 1.1E-06). The signal at 6p22.1 was identified within the olfactory receptor gene cluster within the broader major histocompatibility complex (MHC) region. The strongest candidate locus within this genomic area was TRIM27. This gene encodes a ubiquitin E3 ligase, which is an interaction partner of methyl-CpG-binding domain (MBD) proteins, such as MBD3 and MBD4, and rare protein-coding mutations within MBD3 and MBD4 have been linked to autism. The signal at 14q22.1 was found within a gene-poor region. Single-variant findings were complemented by estimations of the narrow-sense heritability in ALSPAC, suggesting that approximately a fifth of the phenotypic variance in social communication traits is accounted for by joint additive effects of genotyped single nucleotide polymorphisms throughout the genome (h2(SE) = 0.18 (0.066), P = 0.0027). Conclusion: Overall, our study provides both joint and single-SNP-based evidence for the contribution of common polymorphisms to variation in social communication phenotypes.
  • Pouw, W., & Dixon, J. A. (2019). Entrainment and modulation of gesture-speech synchrony under delayed auditory feedback. Cognitive Science, 43(3): e12721. doi:10.1111/cogs.12721.

    Abstract

    Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill’s (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill’s original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are codependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

    Additional information

    https://osf.io/pcde3/
  • Pouw, W., Rop, G., De Koning, B., & Paas, F. (2019). The cognitive basis for the split-attention effect. Journal of Experimental Psychology: General, 148(11), 2058-2075. doi:10.1037/xge0000578.

    Abstract

    The split-attention effect entails that learning from spatially separated, but mutually referring information sources (e.g., text and picture), is less effective than learning from the equivalent spatially integrated sources. According to cognitive load theory, impaired learning is caused by the working memory load imposed by the need to distribute attention between the information sources and mentally integrate them. In this study, we directly tested whether the split-attention effect is caused by spatial separation per se. Spatial distance was varied in basic cognitive tasks involving pictures (Experiment 1) and text–picture combinations (Experiment 2; preregistered study), and in more ecologically valid learning materials (Experiment 3). Experiment 1 showed that having to integrate two pictorial stimuli at greater distances diminished performance on a secondary visual working memory task, but did not lead to slower integration. When participants had to integrate a picture and written text in Experiment 2, a greater distance led to slower integration of the stimuli, but not to diminished performance on the secondary task. Experiment 3 showed that presenting spatially separated (compared with integrated) textual and pictorial information yielded fewer integrative eye movements, but this was not further exacerbated when increasing spatial distance even further. This effect on learning processes did not lead to differences in learning outcomes between conditions. In conclusion, we provide evidence that larger distances between spatially separated information sources influence learning processes, but that spatial separation on its own is not likely to be the only, nor a sufficient, condition for impacting learning outcomes.

    Files private

    Request files
  • Preisig, B., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12(3), 775-777. doi:10.1016/j.brs.2019.01.007.
  • Preisig, B., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. The Journal of the Acoustical Society of America, 145, EL190-EL196. doi:10.1121/1.5092829.

    Abstract

    The present study investigated whether speech-related spectral information benefits from initially predominant right or left hemisphere processing. Normal hearing individuals categorized speech sounds composed of an ambiguous base (perceptually intermediate between /ga/ and /da/), presented to one ear, and a disambiguating low or high F3 chirp presented to the other ear. Shorter response times were found when the chirp was presented to the left ear than to the right ear (inducing initially right-hemisphere chirp processing), but no between-ear differences in strength of overall integration. The results are in line with the assumptions of a right hemispheric dominance for spectral processing.

    Additional information

    Supplementary material
  • Prystauka, Y., & Lewis, A. G. (2019). The power of neural oscillations to inform sentence comprehension: A linguistic perspective. Language and Linguistics Compass, 13 (9): e12347. doi:10.1111/lnc3.12347.

    Abstract

    The field of psycholinguistics is currently experiencing an explosion of interest in the analysis of neural oscillations—rhythmic brain activity synchronized at different temporal and spatial levels. Given that language comprehension relies on a myriad of processes, which are carried out in parallel in distributed brain networks, there is hope that this methodology might bring the field closer to understanding some of the more basic (spatially and temporally distributed, yet at the same time often overlapping) neural computations that support language function. In this review, we discuss existing proposals linking oscillatory dynamics in different frequency bands to basic neural computations and review relevant theories suggesting associations between band-specific oscillations and higher-level cognitive processes. More or less consistent patterns of oscillatory activity related to certain types of linguistic processing can already be derived from the evidence that has accumulated over the past few decades. The centerpiece of the current review is a synthesis of such patterns grouped by linguistic phenomenon. We restrict our review to evidence linking measures of oscillatory power to the comprehension of sentences, as well as linguistically (and/or pragmatically) more complex structures. For each grouping, we provide a brief summary and a table of associated oscillatory signatures that a psycholinguist might expect to find when employing a particular linguistic task. Summarizing across different paradigms, we conclude that a handful of basic neural oscillatory mechanisms are likely recruited in different ways and at different times for carrying out a variety of linguistic computations.
  • Quinn, S., & Kidd, E. (2019). Symbolic play promotes non‐verbal communicative exchange in infant–caregiver dyads. British Journal of Developmental Psychology, 37(1), 33-50. doi:10.1111/bjdp.12251.

    Abstract

    Symbolic play has long been considered a fertile context for communicative development (Bruner, 1983, Child's talk: Learning to use language, Oxford University Press, Oxford; Vygotsky, 1962, Thought and language, MIT Press, Cambridge, MA; Vygotsky, 1978, Mind in society: The development of higher psychological processes. Harvard University Press, Cambridge, MA). In the current study, we examined caregiver–infant interaction during symbolic play and compared it to interaction in a comparable but non‐symbolic context (i.e., ‘functional’ play). Fifty‐four (N = 54) caregivers and their 18‐month‐old infants were observed engaging in 20 min of play (symbolic, functional). Play interactions were coded and compared across play conditions for joint attention (JA) and gesture use. Compared with functional play, symbolic play was characterized by greater frequency and duration of JA and greater gesture use, particularly the use of iconic gestures with an object in hand. The results suggest that symbolic play provides a rich context for the exchange and negotiation of meaning, and thus may contribute to the development of important skills underlying communicative development.
  • Radenkovic, S., Bird, M. J., Emmerzaal, T. L., Wong, S. Y., Felgueira, C., Stiers, K. M., Sabbagh, L., Himmelreich, N., Poschet, G., Windmolders, P., Verheijen, J., Witters, P., Altassan, R., Honzik, T., Eminoglu, T. F., James, P. M., Edmondson, A. C., Hertecant, J., Kozicz, T., Thiel, C., Vermeersch, P., Cassiman, D., Beamer, L., Morava, E., & Ghesquiere, B. (2019). The metabolic map into the pathomechanism and treatment of PGM1-CDG. American Journal of Human Genetics, 104(5), 835-846. doi:10.1016/j.ajhg.2019.03.003.

    Abstract

    Phosphoglucomutase 1 (PGM1) encodes the metabolic enzyme that interconverts glucose-6-P and glucose-1-P. Mutations in PGM1 cause impairment in glycogen metabolism and glycosylation, the latter manifesting as a congenital disorder of glycosylation (CDG). This unique metabolic defect leads to abnormal N-glycan synthesis in the endoplasmic reticulum (ER) and the Golgi apparatus (GA). On the basis of the decreased galactosylation in glycan chains, galactose was administered to individuals with PGM1-CDG and was shown to markedly reverse most disease-related laboratory abnormalities. The disease and treatment mechanisms, however, have remained largely elusive. Here, we confirm the clinical benefit of galactose supplementation in PGM1-CDG-affected individuals and obtain significant insights into the functional and biochemical regulation of glycosylation. We report here that, by using tracer-based metabolomics, we found that galactose treatment of PGM1-CDG fibroblasts metabolically re-wires their sugar metabolism, and as such replenishes the depleted levels of galactose-1-P, as well as the levels of UDP-glucose and UDP-galactose, the nucleotide sugars that are required for ER- and GA-linked glycosylation, respectively. To this end, we further show that the galactose in UDP-galactose is incorporated into mature, de novo glycans. Our results also allude to the potential of monosaccharide therapy for several other CDG.
  • Räsänen, O., Seshadri, S., Karadayi, J., Riebling, E., Bunce, J., Cristia, A., Metze, F., Casillas, M., Rosemberg, C., Bergelson, E., & Soderstrom, M. (2019). Automatic word count estimation from daylong child-centered recordings in various language environments using language-independent syllabification of speech. Speech Communication, 113, 63-80. doi:10.1016/j.specom.2019.08.005.

    Abstract

    Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
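    The final stage of the system described above is a language-dependent mapping from syllable counts (plus other acoustic features) to word count estimates. As a minimal sketch, assuming syllable counts have already been extracted and substituting a plain linear regression for the paper's actual estimator, that mapping step might look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: per-utterance syllable counts (from an automatic
# syllabifier) and reference word counts (from orthographic transcriptions).
syllable_counts = np.array([[4], [9], [2], [12], [6], [15], [3], [8]])
word_counts = np.array([3, 6, 1, 9, 4, 11, 2, 6])

# Fit a language-dependent mapping from syllable counts to word counts.
wce = LinearRegression().fit(syllable_counts, word_counts)

# Apply the mapping to new, untranscribed utterances from a daylong recording.
new_utterances = np.array([[5], [10], [7]])
print("estimated word counts:", np.round(wce.predict(new_utterances), 1))
```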
  • Ravignani, A., Sonnweber, R.-S., Stobbe, N., & Fitch, W. T. (2013). Action at a distance: Dependency sensitivity in a New World primate. Biology Letters, 9(6): 0130852. doi:10.1098/rsbl.2013.0852.

    Abstract

    Sensitivity to dependencies (correspondences between distant items) in sensory stimuli plays a crucial role in human music and language. Here, we show that squirrel monkeys (Saimiri sciureus) can detect abstract, non-adjacent dependencies in auditory stimuli. Monkeys discriminated between tone sequences containing a dependency and those lacking it, and generalized to previously unheard pitch classes and novel dependency distances. This constitutes the first pattern learning study where artificial stimuli were designed with the species' communication system in mind. These results suggest that the ability to recognize dependencies represents a capability that had already evolved in humans’ last common ancestor with squirrel monkeys, and perhaps before.
  • Ravignani, A. (2019). [Review of the book Animal beauty: On the evolution of biological aesthetics by C. Nüsslein-Volhard]. Animal Behaviour, 155, 171-172. doi:10.1016/j.anbehav.2019.07.005.
  • Ravignani, A. (2019). [Review of the book The origins of musicality ed. by H. Honing]. Perception, 48(1), 102-105. doi:10.1177/0301006618817430.
  • Ravignani, A. (2019). Humans and other musical animals [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Current Biology, 29(8), R271-R273. doi:10.1016/j.cub.2019.03.013.
  • Ravignani, A., & de Reus, K. (2019). Modelling animal interactive rhythms in communication. Evolutionary Bioinformatics, 15, 1-14. doi:10.1177/1176934318823558.

    Abstract

    Time is one crucial dimension conveying information in animal communication. Evolution has shaped animals’ nervous systems to produce signals with temporal properties fitting their socio-ecological niches. Many quantitative models of mechanisms underlying rhythmic behaviour exist, spanning insects, crustaceans, birds, amphibians, and mammals. However, these computational and mathematical models are often presented in isolation. Here, we provide an overview of the main mathematical models employed in the study of animal rhythmic communication among conspecifics. After presenting basic definitions and mathematical formalisms, we discuss each individual model. These computational models are then compared using simulated data to uncover similarities and key differences in the underlying mechanisms found across species. Our review of the empirical literature is admittedly limited. We stress the need of using comparative computer simulations – both before and after animal experiments – to better understand animal timing in interaction. We hope this article will serve as a potential first step towards a common computational framework to describe temporal interactions in animals, including humans.

    Additional information

    Supplemental material files
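    Many of the models reviewed in this article treat interacting animal rhythms as coupled oscillators. As one deliberately simple, generic stand-in for the specific models compared in the paper, two phase oscillators with Kuramoto-style coupling can be simulated with basic Euler integration; a near-constant phase difference at the end of the run indicates entrainment:

```python
import numpy as np

def simulate_coupled_oscillators(f1=2.0, f2=2.3, k=1.5, dt=0.01, t_max=20.0):
    """Two phase oscillators with intrinsic frequencies f1, f2 (Hz) and coupling strength k."""
    n = int(t_max / dt)
    phase = np.zeros((n, 2))
    for t in range(1, n):
        p1, p2 = phase[t - 1]
        # Each oscillator advances at its own rate plus a pull towards the other.
        dp1 = 2 * np.pi * f1 + k * np.sin(p2 - p1)
        dp2 = 2 * np.pi * f2 + k * np.sin(p1 - p2)
        phase[t] = [p1 + dp1 * dt, p2 + dp2 * dt]
    return phase

phases = simulate_coupled_oscillators()
# With these settings the oscillators phase-lock, so the difference settles to a constant.
print("final phase difference (rad):",
      float(np.mod(phases[-1, 0] - phases[-1, 1], 2 * np.pi)))
```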
  • Ravignani, A., Verga, L., & Greenfield, M. D. (2019). Interactive rhythms across species: The evolutionary biology of animal chorusing and turn-taking. Annals of the New York Academy of Sciences, 1453(1), 12-21. doi:10.1111/nyas.14230.

    Abstract

    The study of human language is progressively moving toward comparative and interactive frameworks, extending the concept of turn‐taking to animal communication. While such an endeavor will help us understand the interactive origins of language, any theoretical account for cross‐species turn‐taking should consider three key points. First, animal turn‐taking must incorporate biological studies on animal chorusing, namely how different species coordinate their signals over time. Second, while concepts employed in human communication and turn‐taking, such as intentionality, are still debated in animal behavior, lower level mechanisms with clear neurobiological bases can explain much of animal interactive behavior. Third, social behavior, interactivity, and cooperation can be orthogonal, and the alternation of animal signals need not be cooperative. Considering turn‐taking a subset of chorusing in the rhythmic dimension may avoid overinterpretation and enhance the comparability of future empirical work.
  • Ravignani, A. (2019). Everything you always wanted to know about sexual selection in 129 pages [Review of the book Sexual selection: A very short introduction by M. Zuk and L. W. Simmons]. Journal of Mammalogy, 100(6), 2004-2005. doi:10.1093/jmammal/gyz168.
  • Ravignani, A., & Gamba, M. (2019). Evolving musicality [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Trends in Ecology and Evolution, 34(7), 583-584. doi:10.1016/j.tree.2019.04.016.
  • Ravignani, A., Kello, C. T., de Reus, K., Kotz, S. A., Dalla Bella, S., Mendez-Arostegui, M., Rapado-Tamarit, B., Rubio-Garcia, A., & de Boer, B. (2019). Ontogeny of vocal rhythms in harbor seal pups: An exploratory study. Current Zoology, 65(1), 107-120. doi:10.1093/cz/zoy055.

    Abstract

    Puppyhood is a very active social and vocal period in the life of a harbor seal (Phoca vitulina). An important feature of vocalizations is their temporal and rhythmic structure, and understanding vocal timing and rhythms in harbor seals is critical to a cross-species hypothesis in evolutionary neuroscience that links vocal learning, rhythm perception, and synchronization. This study utilized analytical techniques that may best capture rhythmic structure in pup vocalizations, with the goal of examining whether (1) harbor seal pups show rhythmic structure in their calls and (2) rhythms evolve over time. Calls of 3 wild-born seal pups were recorded daily over the course of 1-3 weeks; 3 temporal features were analyzed using 3 complementary techniques. We identified temporal and rhythmic structure in pup calls across different time windows. The calls of harbor seal pups exhibit some degree of temporal and rhythmic organization, which evolves over puppyhood and resembles that of other species' interactive communication. We suggest next steps for investigating call structure in harbor seal pups and propose comparative hypotheses to test in other pinniped species.
  • Ravignani, A., Filippi, P., & Fitch, W. T. (2019). Perceptual tuning influences rule generalization: Testing humans with monkey-tailored stimuli. i-Perception, 10(2), 1-5. doi:10.1177/2041669519846135.

    Abstract

    Comparative research investigating how nonhuman animals generalize patterns of auditory stimuli often uses sequences of human speech syllables and reports limited generalization abilities in animals. Here, we reverse this logic, testing humans with stimulus sequences tailored to squirrel monkeys. When test stimuli are familiar (human voices), humans succeed in two types of generalization. However, when the same structural rule is instantiated over unfamiliar but perceivable sounds within squirrel monkeys’ optimal hearing frequency range, human participants master only one type of generalization. These findings have methodological implications for the design of comparative experiments, which should be fair towards all tested species’ proclivities and limitations.

  • Ravignani, A., Olivera, M. V., Gingras, B., Hofer, R., Hernandez, R. C., Sonnweber, R. S., & Fitch, T. W. (2013). Primate drum kit: A system for studying acoustic pattern production by non-human primates using acceleration and strain sensors. Sensors, 13(8), 9790-9820. doi:10.3390/s130809790.

    Abstract

    The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step to enable the testing of a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available, with the use of sensors in non-human animals being almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of data sensed from these movements, and (iii) the possibility to alter the acoustic feedback properties of the object using remote control. We present two prototypes we developed for application with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, costs, intended use, and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data is sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy. Acceleration data is sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments.
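
    The first prototype's data path (piezoelectric strain sensors read by an Arduino and streamed to Python) can be illustrated with a minimal serial-reading loop. The sketch below assumes, purely for illustration, that the Arduino prints one comma-separated line of four strain readings per sample; the port name, baud rate, and line format are assumptions, and the third-party pyserial package is required.

        # Minimal sketch of reading strain data from an Arduino over serial.
        # Port, baud rate, and the comma-separated line format are assumptions;
        # requires the third-party `pyserial` package (import name: serial).
        import serial

        def read_strain_samples(port="/dev/ttyACM0", baud=9600, n_samples=100):
            """Yield tuples of four strain readings parsed from serial lines."""
            with serial.Serial(port, baud, timeout=1.0) as conn:
                for _ in range(n_samples):
                    line = conn.readline().decode("ascii", errors="ignore").strip()
                    if not line:
                        continue                      # timeout or empty line
                    try:
                        values = tuple(float(v) for v in line.split(","))
                    except ValueError:
                        continue                      # skip malformed lines
                    if len(values) == 4:              # one value per piezo element
                        yield values

        if __name__ == "__main__":
            for sample in read_strain_samples(n_samples=10):
                print(sample)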
  • Ravignani, A. (2019). Singing seals imitate human speech. Journal of Experimental Biology, 222: jeb208447. doi:10.1242/jeb.208447.
  • Ravignani, A. (2019). Rhythm and synchrony in animal movement and communication. Current Zoology, 65(1), 77-81. doi:10.1093/cz/zoy087.

    Abstract

    Animal communication and motoric behavior develop over time. Often, this temporal dimension has communicative relevance and is organized according to structural patterns. In other words, time is a crucial dimension for rhythm and synchrony in animal movement and communication. Rhythm is defined as temporal structure at a second-millisecond time scale (Kotz et al. 2018). Synchrony is defined as precise co-occurrence of 2 behaviors in time (Ravignani 2017).

    Rhythm, synchrony, and other forms of temporal interaction are taking center stage in animal behavior and communication. Critical questions include, among others: Which species show which rhythmic predispositions? How does a species’ sensitivity for, or proclivity towards, rhythm arise? What are the species-specific functions of rhythm and synchrony, and are there functional trends across species? How did similar or different rhythmic behaviors evolve in different species? This Special Column aims at collecting and contrasting research from different species, perceptual modalities, and empirical methods. The focus is on timing, rhythm, and synchrony in the second-millisecond range.

    Three main approaches are commonly adopted to study animal rhythms, with a focus on: 1) spontaneous individual rhythm production, 2) group rhythms, or 3) synchronization experiments. I concisely introduce them below (see also Kotz et al. 2018; Ravignani et al. 2018).
  • Ravignani, A., Dalla Bella, S., Falk, S., Kello, C. T., Noriega, F., & Kotz, S. A. (2019). Rhythm in speech and animal vocalizations: A cross‐species perspective. Annals of the New York Academy of Sciences, 1453(1), 79-98. doi:10.1111/nyas.14166.

    Abstract

    Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human inventions. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, are perceived and produced, their biological and developmental bases, and communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross‐species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross‐species perspective of speech rhythm, our review puts some pieces of the puzzle together.
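
    Among the quantitative techniques such a review covers, several operate directly on inter-onset intervals (IOIs) extracted from an acoustic sequence. As one hedged illustration (not necessarily a metric emphasized in the paper), the sketch below computes IOIs and the normalized pairwise variability index (nPVI), a standard measure of durational contrast between adjacent intervals; the onset times are made up.

        # Illustrative only: compute inter-onset intervals (IOIs) and the nPVI,
        # one widely used rhythm metric; onset times below are invented values.

        def iois(onsets):
            """Inter-onset intervals from a sorted list of onset times (seconds)."""
            return [b - a for a, b in zip(onsets, onsets[1:])]

        def npvi(intervals):
            """Normalized pairwise variability index of a sequence of intervals."""
            pairs = list(zip(intervals, intervals[1:]))
            if not pairs:
                raise ValueError("need at least two intervals")
            return 100.0 / len(pairs) * sum(
                abs(a - b) / ((a + b) / 2.0) for a, b in pairs
            )

        if __name__ == "__main__":
            onsets = [0.00, 0.41, 0.80, 1.35, 1.71]   # hypothetical call onsets
            d = iois(onsets)
            print("IOIs:", d)
            print("nPVI:", round(npvi(d), 1))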
  • Ravignani, A. (2019). Seeking shared ground in space. Science, 366(6466), 696. doi:10.1126/science.aay6955.
  • Ravignani, A. (2019). Timing of antisynchronous calling: A case study in a harbor seal pup (Phoca vitulina). Journal of Comparative Psychology, 133(2), 272-277. doi:10.1037/com0000160.

    Abstract

    Alternative mathematical models predict differences in how animals adjust the timing of their calls. Differences can be measured as the effect of the timing of a conspecific call on the rate and period of calling of a focal animal, and the lag between the two. Here, I test these alternative hypotheses by tapping into harbor seals’ (Phoca vitulina) mechanisms for spontaneous timing. Both socioecology and vocal behavior of harbor seals make them an interesting model species to study call rhythm and timing. Here, a wild-born seal pup was tested in controlled laboratory conditions. Based on previous recordings of her vocalizations and those of others, I designed playback experiments adapted to that specific animal. The call onsets of the animal were measured as a function of tempo, rhythmic regularity, and spectral properties of the playbacks. The pup adapted the timing of her calls in response to conspecifics’ calls. Rather than responding at a fixed time delay, the pup adjusted her calls’ onset to occur at a fraction of the playback tempo, showing a relative-phase antisynchrony. Experimental results were confirmed via computational modeling. This case study lends preliminary support to a classic mathematical model of animal behavior—Hamilton’s selfish herd—in the acoustic domain.
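
    The relative-phase analysis described above can be illustrated in a few lines: each call onset is expressed as a phase within the ongoing playback cycle, so values near 0 indicate synchrony with the playback and values near 0.5 indicate antisynchrony. The sketch below is a generic illustration with invented numbers, not the paper's analysis pipeline.

        # Illustrative sketch: express call onsets as relative phases within a
        # periodic playback. Onset times and the playback period are invented.

        def relative_phases(call_onsets, playback_period, playback_start=0.0):
            """Phase in [0, 1) of each call onset relative to the playback cycle."""
            return [((t - playback_start) % playback_period) / playback_period
                    for t in call_onsets]

        if __name__ == "__main__":
            period = 1.2                      # hypothetical playback period (s)
            calls = [0.62, 1.85, 3.01, 4.22]  # hypothetical pup call onsets (s)
            phases = relative_phases(calls, period)
            print([round(p, 2) for p in phases])
            # Values clustering near 0.5 would indicate antisynchrony:
            # calls placed roughly halfway between successive playback stimuli.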
  • Ravignani, A. (2019). Understanding mammals, hands-on [Review of the book Mammalogy techniques lab manual by J. M. Ryan]. Journal of Mammalogy, 100(5), 1695-1696. doi:10.1093/jmammal/gyz132.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Larger communities create more systematic languages. Proceedings of the Royal Society B: Biological Sciences, 286(1907): 20191262. doi:10.1098/rspb.2019.1262.

    Abstract

    Understanding worldwide patterns of language diversity has long been a goal for evolutionary scientists, linguists and philosophers. Research over the past decade has suggested that linguistic diversity may result from differences in the social environments in which languages evolve. Specifically, recent work found that languages spoken in larger communities typically have more systematic grammatical structures. However, in the real world, community size is confounded with other social factors such as network structure and the number of second language learners in the community, and it is often assumed that linguistic simplification is driven by these factors instead. Here, we show that in contrast to previous assumptions, community size has a unique and important influence on linguistic structure. We experimentally examine the live formation of new languages created in the laboratory by small and larger groups, and find that larger groups of interacting participants develop more systematic languages over time, and do so faster and more consistently than small groups. Small groups also vary more in their linguistic behaviours, suggesting that small communities are more vulnerable to drift. These results show that community size predicts patterns of language diversity, and suggest that an increase in community size might have contributed to language evolution.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Compositional structure can emerge without generational transmission. Cognition, 182, 151-164. doi:10.1016/j.cognition.2018.09.010.

    Abstract

    Experimental work in the field of language evolution has shown that novel signal systems become more structured over time. In a recent paper, Kirby, Tamariz, Cornish, and Smith (2015) argued that compositional languages can emerge only when languages are transmitted across multiple generations. In the current paper, we show that compositional languages can emerge in a closed community within a single generation. We conducted a communication experiment in which we tested the emergence of linguistic structure in different micro-societies of four participants, who interacted in alternating dyads using an artificial language to refer to novel meanings. Importantly, the communication included two real-world aspects of language acquisition and use, which introduce compressibility pressures: (a) multiple interaction partners and (b) an expanding meaning space. Our results show that languages become significantly more structured over time, with participants converging on shared, stable, and compositional lexicons. These findings indicate that new learners are not necessary for the formation of linguistic structure within a community, and have implications for related fields such as developing sign languages and creoles.
  • Reber, S. A., Šlipogor, V., Oh, J., Ravignani, A., Hoeschele, M., Bugnyar, T., & Fitch, W. T. (2019). Common marmosets are sensitive to simple dependencies at variable distances in an artificial grammar. Evolution and Human Behavior, 40(2), 214-221. doi:10.1016/j.evolhumbehav.2018.11.006.

    Abstract

    Recognizing that two elements within a sequence of variable length depend on each other is a key ability in understanding the structure of language and music. Perception of such interdependencies has previously been documented in chimpanzees in the visual domain and in human infants and common squirrel monkeys with auditory playback experiments, but it remains unclear whether it typifies primates in general. Here, we investigated the ability of common marmosets (Callithrix jacchus) to recognize and respond to such dependencies. We tested subjects in a familiarization-discrimination playback experiment using stimuli composed of pure tones that either conformed or did not conform to a grammatical rule. After familiarization to sequences with dependencies, marmosets spontaneously discriminated between sequences containing and lacking dependencies (‘consistent’ and ‘inconsistent’, respectively), independent of stimulus length. Marmosets looked more often to the sound source when hearing sequences consistent with the familiarization stimuli, as previously found in human infants. Crucially, looks were coded automatically by computer software, avoiding human bias. Our results support the hypothesis that the ability to perceive dependencies at variable distances was already present in the common ancestor of all anthropoid primates (Simiiformes).
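
    Abstractly, such stimuli instantiate an A...B dependency: two designated elements must co-occur at the edges of a sequence while a variable number of intervening elements is free. The toy generator below uses placeholder token labels rather than the pure-tone stimuli of the study, and is meant only to make the structure of consistent versus inconsistent sequences concrete.

        # Toy generator for an A (X)^n B dependency grammar of variable length.
        # Token labels are placeholders, not the pure-tone stimuli of the study.
        import random

        def make_sequence(consistent=True, n_fillers=None, rng=random):
            """Return a token sequence that does or does not obey the A...B rule."""
            if n_fillers is None:
                n_fillers = rng.randint(1, 5)          # variable dependency distance
            fillers = [rng.choice(["X", "Y", "Z"]) for _ in range(n_fillers)]
            if consistent:
                return ["A"] + fillers + ["B"]         # edge elements depend on each other
            # Inconsistent: break the dependency by replacing the final element.
            return ["A"] + fillers + [rng.choice(["X", "Y", "Z"])]

        if __name__ == "__main__":
            random.seed(0)
            for ok in (True, False):
                print("consistent" if ok else "inconsistent", make_sequence(ok))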
  • Redmann, A., FitzPatrick, I., & Indefrey, P. (2019). The time course of colour congruency effects in picture naming. Acta Psychologica, 196, 96-108. doi:10.1016/j.actpsy.2019.04.005.

    Abstract

    In our interactions with people and objects in the world around us, as well as in communicating our thoughts, we rely on the use of conceptual knowledge stored in long-term memory. From a frame-theoretic point of view, a concept is represented by a central node and recursive attribute-value structures further specifying the concept. The present study explores whether and how the activation of an attribute within a frame might influence access to the concept's name in language production, focussing on the colour attribute. Colour has been shown to contribute to object recognition, naming, and memory retrieval, and there is evidence that colour plays a different role in naming objects that have a typical colour (high colour-diagnostic objects such as tomatoes) than in naming objects without a typical colour (low colour-diagnostic objects such as bicycles). We report two behavioural experiments designed to reveal potential effects of the activation of an object's typical colour on naming the object in a picture-word interference paradigm. This paradigm was used to investigate whether naming is facilitated when typical colours are presented alongside the to-be-named picture (e.g., the word “red” superimposed on the picture of a tomato), compared to atypical colours (such as “brown”), unrelated adjectives (such as “fast”), or random letter strings. To further explore the time course of these potential effects, the words were presented at different time points relative to the to-be-named picture (Exp. 1: −400 ms; Exp. 2: −200 ms, 0 ms, and +200 ms). By including both high and low colour-diagnostic objects, it was possible to explore whether the activation of a colour differentially affects naming of objects that have a strong association with a typical colour. The results showed that (pre-)activation of the appropriate colour attribute facilitated naming compared to an inappropriate colour. This was only the case for objects closely connected with a typical colour. Consequences of these findings for frame-theoretic accounts of conceptual representation are discussed.
  • Reesink, G. (2013). Expressing the GIVE event in Papuan languages: A preliminary survey. Linguistic Typology, 17(2), 217-266. doi:10.1515/lity-2013-0010.

    Abstract

    The linguistic expression of the GIVE event is investigated in a sample of 72 Papuan languages, 33 belonging to the Trans New Guinea family, 39 of various non-TNG lineages. Irrespective of the verbal template (prefix, suffix, or no indexation of undergoer), in the majority of languages the recipient is marked as the direct object of a monotransitive verb, which sometimes involves stem suppletion for the recipient. While a few languages allow verbal affixation for all three arguments, a number of languages challenge the universal claim that the ‘give’ verb always has three arguments.
  • Regier, T., Khetarpal, N., & Majid, A. (2013). Inferring semantic maps. Linguistic Typology, 17, 89-105. doi:10.1515/lity-2013-0003.

    Abstract

    Semantic maps are a means of representing universal structure underlying cross-language semantic variation. However, no algorithm has existed for inferring a graph-based semantic map from data. Here, we note that this open problem is formally identical to the known problem of inferring a social network from disease outbreaks. From this identity it follows that semantic map inference is computationally intractable, but that an efficient approximation algorithm for it exists. We demonstrate that this algorithm produces sensible semantic maps from two existing bodies of data. We conclude that universal semantic graph structure can be automatically approximated from cross-language semantic data.
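
    The inference problem can be stated concretely: given, for each language, the set of meanings covered by a single term, find a small graph over meanings in which every such set induces a connected subgraph. The paper's actual approximation algorithm is not reproduced here; the sketch below is merely one plausible greedy heuristic under that formulation, with invented example data.

        # Illustrative greedy heuristic (not the algorithm from the paper):
        # build a graph over meanings so that every term's meaning set induces
        # a connected subgraph, adding edges one at a time as needed.

        def components(nodes, edges):
            """Connected components of the subgraph induced by `nodes`."""
            nodes = set(nodes)
            adj = {n: set() for n in nodes}
            for a, b in edges:
                if a in nodes and b in nodes:
                    adj[a].add(b)
                    adj[b].add(a)
            seen, comps = set(), []
            for n in nodes:
                if n in seen:
                    continue
                stack, comp = [n], set()
                while stack:
                    cur = stack.pop()
                    if cur in comp:
                        continue
                    comp.add(cur)
                    stack.extend(adj[cur] - comp)
                seen |= comp
                comps.append(comp)
            return comps

        def infer_semantic_map(term_sets):
            """Greedily add edges until every term set induces a connected subgraph."""
            edges = set()
            for s in term_sets:
                comps = components(s, edges)
                while len(comps) > 1:
                    a = next(iter(comps[0]))
                    b = next(iter(comps[1]))
                    edges.add((a, b))
                    comps = components(s, edges)
            return edges

        if __name__ == "__main__":
            # Hypothetical data: each set lists meanings expressed by one term.
            data = [{"on", "over"}, {"over", "above"}, {"on", "attached"}]
            print(sorted(infer_semantic_map(data)))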
  • Reinisch, E., Weber, A., & Mitterer, H. (2013). Listeners retune phoneme categories across languages. Journal of Experimental Psychology: Human Perception and Performance, 39, 75-86. doi:10.1037/a0027979.

    Abstract

    Native listeners adapt to noncanonically produced speech by retuning phoneme boundaries by means of lexical knowledge. We asked whether a second language lexicon can also guide category retuning and whether perceptual learning transfers from a second language (L2) to the native language (L1). During a Dutch lexical-decision task, German and Dutch listeners were exposed to unusual pronunciation variants in which word-final /f/ or /s/ was replaced by an ambiguous sound. At test, listeners categorized Dutch minimal word pairs ending in sounds along an /f/–/s/ continuum. Dutch L1 and German L2 listeners showed boundary shifts of a similar magnitude. Moreover, following exposure to Dutch-accented English, Dutch listeners also showed comparable effects of category retuning when they heard the same speaker speak her native language (Dutch) during the test. The former result suggests that lexical representations in a second language are specific enough to support lexically guided retuning, and the latter implies that production patterns in a second language are deemed a stable speaker characteristic likely to transfer to the native language; thus retuning of phoneme categories applies across languages.
  • Reinisch, E., & Sjerps, M. J. (2013). The uptake of spectral and temporal cues in vowel perception is rapidly influenced by context. Journal of Phonetics, 41, 101-116. doi:10.1016/j.wocn.2013.01.002.

    Abstract

    Speech perception is dependent on auditory information within phonemes such as spectral or temporal cues. The perception of those cues, however, is affected by auditory information in surrounding context (e.g., a fast context sentence can make a target vowel sound subjectively longer). In a two-by-two design the current experiments investigated when these different factors influence vowel perception. Dutch listeners categorized minimal word pairs such as /tɑk/–/taːk/ (“branch”–“task”) embedded in a context sentence. Critically, the Dutch /ɑ/–/aː/ contrast is cued by spectral and temporal information. We varied the second formant (F2) frequencies and durations of the target vowels. Independently, we also varied the F2 and duration of all segments in the context sentence. The timecourse of cue uptake on the targets was measured in a printed-word eye-tracking paradigm. Results show that the uptake of spectral cues slightly precedes the uptake of temporal cues. Furthermore, acoustic manipulations of the context sentences influenced the uptake of cues in the target vowel immediately. That is, listeners did not need additional time to integrate spectral or temporal cues of a target sound with auditory information in the context. These findings argue for an early locus of contextual influences in speech perception.
  • Reinisch, E., Jesse, A., & Nygaard, L. C. (2013). Tone of voice guides word learning in informative referential contexts. Quarterly Journal of Experimental Psychology, 66, 1227-1240. doi:10.1080/17470218.2012.736525.

    Abstract

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalise them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
  • De Resende, N. C. A., Mota, M. B., & Seuren, P. A. M. (2019). The processing of grammatical gender agreement in Brazilian Portuguese: ERP evidence in favor of a single route. Journal of Psycholinguistic Research, 48(1), 181-198. doi:10.1007/s10936-018-9598-z.

    Abstract

    The present study used event-related potentials to investigate whether the processing of grammatical gender agreement involving regular and irregular gender forms recruits the same or distinct neurocognitive mechanisms, and whether different grammatical gender agreement conditions elicit the same or different ERP signals. Native speakers of Brazilian Portuguese read sentences containing congruent and incongruent grammatical gender agreement between a determiner and a regular or an irregular form (condition 1) and between a regular or an irregular form and an adjective (condition 2). We found a biphasic LAN/P600 effect for gender agreement violations involving regular and irregular forms in both conditions. However, in condition 2, trials with incongruent regular forms elicited more positive ongoing waveforms than trials with incongruent irregular forms. Our findings suggest that gender agreement between determiners and nouns recruits the same neurocognitive mechanisms regardless of the nouns’ form and that, depending on the grammatical class of the words involved in gender agreement, differences in ERP signals can emerge.
  • Riedel, M., Wittenburg, P., Reetz, J., van de Sanden, M., Rybicki, J., von Vieth, B. S., Fiameni, G., Mariani, G., Michelini, A., Cacciari, C., Elbers, W., Broeder, D., Verkerk, R., Erastova, E., Lautenschlaeger, M., Budich, R. G., Thielmann, H., Coveney, P., Zasada, S., Haidar, A., Buechner, O., Manzano, C., Memon, S., Memon, S., Helin, H., Suhonen, J., Lecarpentier, D., Koski, K., & Lippert, T. (2013). A data infrastructure reference model with applications: Towards realization of a ScienceTube vision with a data replication service. Journal of Internet Services and Applications, 4, 1-17. doi:10.1186/1869-0238-4-1.

    Abstract

    Scientific user communities have been working with data for many years and thus already have a wide variety of data infrastructures in production today. The aim of this paper is therefore not to create one new general data architecture, which would fail to be adopted by individual user communities. Instead, this contribution designs a reference model with abstract entities that can federate existing concrete infrastructures under one umbrella. A reference model is an abstract framework for understanding significant entities and the relationships between them, and thus helps in comparing existing data infrastructures in terms of functionality, services, and boundary conditions. An architecture derived from such a reference model can then be used to create a federated architecture that builds on the existing infrastructures and aligns them with a major common vision. This common vision, named ‘ScienceTube’ in this contribution, determines the high-level goal that the reference model aims to support. This paper describes how a well-focused use case around data replication and its related activities in the EUDAT project provides a first step towards this vision. Concrete stakeholder requirements from scientific end users, such as those of the European Strategy Forum on Research Infrastructures (ESFRI) projects, underpin this contribution with clear evidence that the EUDAT activities are bottom-up, providing real solutions to the often only loosely described ‘high-level big data challenges’. The federated approach, which takes advantage of community and data centers (with large computational resources), further shows how data replication services enable data-intensive computing on terabytes or even petabytes of data emerging from ESFRI projects.
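
    The replication use case lends itself to a small, generic illustration. The sketch below is a stand-in, not the EUDAT service or its API: it copies a file to a replica location and verifies the copy by checksum, the basic safeguard any replication service of this kind relies on; the file paths are hypothetical.

        # Generic illustration of checksum-verified file replication; a
        # stand-in example, not the EUDAT replication service or its API.
        import hashlib
        import shutil
        from pathlib import Path

        def sha256sum(path: Path) -> str:
            """Hex SHA-256 digest of a file, read in chunks."""
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def replicate(source: Path, replica_dir: Path) -> Path:
            """Copy `source` into `replica_dir` and verify the copy by checksum."""
            replica_dir.mkdir(parents=True, exist_ok=True)
            target = replica_dir / source.name
            shutil.copy2(source, target)
            if sha256sum(source) != sha256sum(target):
                raise IOError(f"checksum mismatch for {target}")
            return target

        if __name__ == "__main__":
            # Hypothetical paths; adjust to real data before running.
            copy = replicate(Path("dataset.nc"), Path("replica_site"))
            print("replica verified:", copy)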
  • Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., Westra, H.-J., Shakhbazov, K., Abdellaoui, A., Agrawal, A., Albrecht, E., Alizadeh, B. Z., Amin, N., Barnard, J., Baumeister, S. E., Benke, K. S., Bielak, L. F., Boatman, J. A., Boyle, P. A., Davies, G., de Leeuw, C., Eklund, N., Evans, D. S., Ferhmann, R., Fischer, K., Gieger, C., Gjessing, H. K., Hägg, S., Harris, J. R., Hayward, C., Holzapfel, C., Ibrahim-Verbaas, C. A., Ingelsson, E., Jacobsson, B., Joshi, P. K., Jugessur, A., Kaakinen, M., Kanoni, S., Karjalainen, J., Kolcic, I., Kristiansson, K., Kutalik, Z., Lahti, J., Lee, S. H., Lin, P., Lind, P. A., Liu, Y., Lohman, K., Loitfelder, M., McMahon, G., Vidal, P. M., Meirelles, O., Milani, L., Myhre, R., Nuotio, M.-L., Oldmeadow, C. J., Petrovic, K. E., Peyrot, W. J., Polasek, O., Quaye, L., Reinmaa, E., Rice, J. P., Rizzi, T. S., Schmidt, H., Schmidt, R., Smith, A. V., Smith, J. A., Tanaka, T., Terracciano, A., van der Loos, M. J. H. M., Vitart, V., Völzke, H., Wellmann, J., Yu, L., Zhao, W., Allik, J., Attia, J. R., Bandinelli, S., Bastardot, F., Beauchamp, J., Bennett, D. A., Berger, K., Bierut, L. J., Boomsma, D. I., Bültmann, U., Campbell, H., Chabris, C. F., Cherkas, L., Chung, M. K., Cucca, F., de Andrade, M., De Jager, P. L., De Neve, J.-E., Deary, I. J., Dedoussis, G. V., Deloukas, P., Dimitriou, M., Eiríksdóttir, G., Elderson, M. F., Eriksson, J. G., Evans, D. M., Faul, J. D., Ferrucci, L., Garcia, M. E., Grönberg, H., Guðnason, V., Hall, P., Harris, J. M., Harris, T. B., Hastie, N. D., Heath, A. C., Hernandez, D. G., Hoffmann, W., Hofman, A., Holle, R., Holliday, E. G., Hottenga, J.-J., Iacono, W. G., Illig, T., Järvelin, M.-R., Kähönen, M., Kaprio, J., Kirkpatrick, R. M., Kowgier, M., Latvala, A., Launer, L. J., Lawlor, D. A., Lehtimäki, T., Li, J., Lichtenstein, P., Lichtner, P., Liewald, D. C., Madden, P. A., Magnusson, P. K. E., Mäkinen, T. E., Masala, M., McGue, M., Metspalu, A., Mielck, A., Miller, M. B., Montgomery, G. W., Mukherjee, S., Nyholt, D. R., Oostra, B. A., Palmer, L. J., Palotie, A., Penninx, B. W. J. H., Perola, M., Peyser, P. A., Preisig, M., Räikkönen, K., Raitakari, O. T., Realo, A., Ring, S. M., Ripatti, S., Rivadeneira, F., Rudan, I., Rustichini, A., Salomaa, V., Sarin, A.-P., Schlessinger, D., Scott, R. J., Snieder, H., St Pourcain, B., Starr, J. M., Sul, J. H., Surakka, I., Svento, R., Teumer, A., Tiemeier, H., van Rooij, F. J. A., Van Wagoner, D. R., Vartiainen, E., Viikari, J., Vollenweider, P., Vonk, J. M., Waeber, G., Weir, D. R., Wichmann, H.-E., Widen, E., Willemsen, G., Wilson, J. F., Wright, A. F., Conley, D., Davey-Smith, G., Franke, L., Groenen, P. J. F., Hofman, A., Johannesson, M., Kardia, S. L. R., Krueger, R. F., Laibson, D., Martin, N. G., Meyer, M. N., Posthuma, D., Thurik, A. R., Timpson, N. J., Uitterlinden, A. G., van Duijn, C. M., Visscher, P. M., Benjamin, D. J., Cesarini, D., Koellinger, P. D., & Study LifeLines Cohort (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 340(6139), 1467-1471. doi:10.1126/science.1235488.

    Abstract

    A genome-wide association study (GWAS) of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent single-nucleotide polymorphisms (SNPs) are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (coefficient of determination R² ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.
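
    As a back-of-the-envelope illustration of the quantities reported (not the study's scoring pipeline), a linear polygenic score is simply a weighted sum of allele counts: with per-SNP R² around 0.02%, single variants explain very little, while a score aggregating many SNPs can reach the roughly 2% reported. The effect sizes and genotypes below are invented.

        # Illustrative linear polygenic score: weighted sum of allele dosages.
        # Effect sizes and genotypes below are invented, not the study's estimates.

        def polygenic_score(dosages, betas):
            """Sum of (allele dosage x effect size) across SNPs for one person."""
            if len(dosages) != len(betas):
                raise ValueError("dosages and betas must align SNP-for-SNP")
            return sum(d * b for d, b in zip(dosages, betas))

        if __name__ == "__main__":
            betas = [0.08, -0.05, 0.02]     # hypothetical months of schooling per allele
            person = [2, 1, 0]              # allele counts at the three SNPs
            print(f"score: {polygenic_score(person, betas):+.2f} months")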

