Publications

  • Matić, D., & Wedgwood, D. (2013). The meanings of focus: The significance of an interpretation-based category in cross-linguistic analysis. Journal of Linguistics, 49, 127-163. doi:10.1017/S0022226712000345.

    Abstract

    Focus is regularly treated as a cross-linguistically stable category that is merely manifested by different structural means in different languages, such that a common focus feature may be realised through, for example, a morpheme in one language and syntactic movement in another. We demonstrate this conception of focus to be unsustainable on both theoretical and empirical grounds, invoking fundamental argumentation regarding the notions of focus and linguistic category, alongside data from a wide range of languages. Attempts to salvage a cross-linguistic notion of focus through parameterisation, the introduction of additional information-structural primitives such as contrast, or reduction to a single common factor are shown to be equally problematic. We identify the causes of repeated misconceptions about the nature of focus in a number of interrelated theoretical and methodological tendencies in linguistic analysis. We propose to see focus as a heuristic tool and to employ it as a means of identifying structural patterns that languages use to generate a certain number of related pragmatic effects, potentially through quite diverse mechanisms.
  • Mazzini, S., Holler, J., & Drijvers, L. (2023). Studying naturalistic human communication using dual-EEG and audio-visual recordings. STAR Protocols, 4(3): 102370. doi:10.1016/j.xpro.2023.102370.

    Abstract

    We present a protocol to study naturalistic human communication using dual-EEG and audio-visual recordings. We describe preparatory steps for data collection including setup preparation, experiment design, and piloting. We then describe the data collection process in detail which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses.
    For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
  • Mazzone, M., & Campisi, E. (2013). Distributed intentionality: A model of intentional behavior in humans. Philosophical Psychology, 26, 267-290. doi:10.1080/09515089.2011.641743.

    Abstract

    Is human behavior, and more specifically linguistic behavior, intentional? Some scholars have proposed that action is driven in a top-down manner by one single intention—i.e., one single conscious goal. Others have argued that actions are mostly non-intentional, insofar as often the single goal driving an action is not consciously represented. We intend to claim that both alternatives are unsatisfactory; more specifically, we claim that actions are intentional, but intentionality is distributed across complex goal-directed representations of action, rather than concentrated in single intentions driving action in a top-down manner. These complex representations encompass a multiplicity of goals, together with other components which are not goals themselves, and are the result of a largely automatic dynamic of activation; such an automatic processing, however, does not preclude the involvement of conscious attention, shifting from one component to the other of the overall goal-directed representation.

  • McConnell, K. (2023). Individual Differences in Holistic and Compositional Language Processing. Journal of Cognition, 6. doi:10.5334/joc.283.

    Abstract

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability for inhibiting distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part and those that preferred the local level in the shifting task showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics whereas others more readily retrieve the two words together as a single chunked unit.
  • McGettigan, C., Eisner, F., Agnew, Z. K., Manly, T., Wisbey, D., & Scott, S. K. (2013). T'ain't what you say, it's the way that you say it—Left insula and inferior frontal cortex work in interaction with superior temporal regions to control the performance of vocal impersonations. Journal of Cognitive Neuroscience, 25(11), 1875-1886. doi:10.1162/jocn_a_00427.

    Abstract

    Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129–135, 2004]. Our voices are highly flexible and dynamic; talkers speak differently, depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right middle/anterior STS showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts.
  • McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15(4), 719-739. doi:10.1017/langcog.2023.9.

    Abstract

    Iconicity in language is receiving increased attention from many fields, but our understanding of iconicity is only as good as the measures we use to quantify it. We collected iconicity measures for 304 Japanese words from English-speaking participants, using rating and guessing tasks. The words included ideophones (structurally marked depictive words) along with regular lexical items from similar semantic domains (e.g., fuwafuwa ‘fluffy’, yawarakai ‘soft’). The two measures correlated, speaking to their validity. However, ideophones received consistently higher iconicity ratings than other items, even when guessed at the same accuracies, suggesting the rating task is more sensitive to cues like structural markedness that frame words as iconic. These cues did not always guide participants to the meanings of ideophones in the guessing task, but they did make them more confident in their guesses, even when they were wrong. Consistently poor guessing results reflect the role different experiences play in shaping construals of iconicity. Using multiple measures in tandem allows us to explore the interplay between iconicity and these external factors. To facilitate this, we introduce a reproducible workflow for creating rating and guessing tasks from standardised wordlists, while also making improvements to the robustness, sensitivity and discriminability of previous approaches.
  • McQueen, J. M., Norris, D., & Cutler, A. (1994). Competition in spoken word recognition: Spotting words in other words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 621-638.

    Abstract

    Although word boundaries are rarely clearly marked, listeners can rapidly recognize the individual words of spoken sentences. Some theories explain this in terms of competition between multiply activated lexical hypotheses; others invoke sensitivity to prosodic structure. We describe a connectionist model, SHORTLIST, in which recognition by activation and competition is successful with a realistically sized lexicon. Three experiments are then reported in which listeners detected real words embedded in nonsense strings, some of which were themselves the onsets of longer words. Effects both of competition between words and of prosodic structure were observed, suggesting that activation and competition alone are not sufficient to explain word recognition in continuous speech. However, the results can be accounted for by a version of SHORTLIST that is sensitive to prosodic structure.
  • McQueen, J. M., Jesse, A., & Mitterer, H. (2023). Lexically mediated compensation for coarticulation still as elusive as a white christmash. Cognitive Science: A Multidisciplinary Journal, 47(9): e13342. doi:10.1111/cogs.13342.

    Abstract

    Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.

  • Meyer, A. S., & Hagoort, P. (2013). What does it mean to predict one's own utterances? [Commentary on Pickering & Garrod]. Behavioral and Brain Sciences, 36, 367-368. doi:10.1017/S0140525X12002786.

    Abstract

    Many authors have recently highlighted the importance of prediction for language comprehension. Pickering & Garrod (P&G) are the first to propose a central role for prediction in language production. This is an intriguing idea, but it is not clear what it means for speakers to predict their own utterances, and how prediction during production can be empirically distinguished from production proper.
  • Meyer, A. S. (1994). Timing in sentence production. Journal of Memory and Language, 33, 471-492. doi:10.1006/jmla.1994.1022.

    Abstract

    Recently, a new theory of timing in sentence production has been proposed by Ferreira (1993). This theory assumes that at the phonological level, each syllable of an utterance is assigned one or more abstract timing units depending on its position in the prosodic structure. The number of timing units associated with a syllable determines the time interval between its onset and the onset of the next syllable. An interesting prediction from the theory, which was confirmed in Ferreira's experiments with speakers of American English, is that the time intervals between syllable onsets should only depend on the syllables' positions in the prosodic structure, but not on their segmental content. However, in the present experiments, which were carried out in Dutch, the intervals between syllable onsets were consistently longer for phonetically long syllables than for short syllables. The implications of this result for models of timing in sentence production are discussed.
  • Meyer, A. S. (2023). Timing in conversation. Journal of Cognition, 6(1), 1-17. doi:10.5334/joc.268.

    Abstract

    Turn-taking in everyday conversation is fast, with median latencies in corpora of conversational speech often reported to be under 300 ms. This seems like magic, given that experimental research on speech planning has shown that speakers need much more time to plan and produce even the shortest of utterances. This paper reviews how language scientists have combined linguistic analyses of conversations and experimental work to understand the skill of swift turn-taking and proposes a tentative solution to the riddle of fast turn-taking.
  • Miceli, S., Negwer, M., van Eijs, F., Kalkhoven, C., van Lierop, I., Homberg, J., & Schubert, D. (2013). High serotonin levels during brain development alter the structural input-output connectivity of neural networks in the rat somatosensory layer IV. Frontiers in Cellular Neuroscience, 7: 88. doi:10.3389/fncel.2013.00088.

    Abstract

    Homeostatic regulation of serotonin (5-HT) concentration is critical for “normal” topographical organization and development of thalamocortical (TC) afferent circuits. Down-regulation of the serotonin transporter (SERT) and the consequent impaired reuptake of 5-HT at the synapse, results in a reduced terminal branching of developing TC afferents within the primary somatosensory cortex (S1). Despite the presence of multiple genetic models, the effect of high extracellular 5-HT levels on the structure and function of developing intracortical neural networks is far from being understood. Here, using juvenile SERT knockout (SERT−/−) rats we investigated, in vitro, the effect of increased 5-HT levels on the structural organization of (i) the TC projections of the ventroposteromedial thalamic nucleus toward S1, (ii) the general barrel-field pattern, and (iii) the electrophysiological and morphological properties of the excitatory cell population in layer IV of S1 [spiny stellate (SpSt) and pyramidal cells]. Our results confirmed previous findings that high levels of 5-HT during development lead to a reduction of the topographical precision of TCA projections toward the barrel cortex. Also, the barrel pattern was altered but not abolished in SERT−/− rats. In layer IV, both excitatory SpSt and pyramidal cells showed a significantly reduced intracolumnar organization of their axonal projections. In addition, the layer IV SpSt cells gave rise to a prominent projection toward the infragranular layer Vb. Our findings point to a structural and functional reorganization of TCAs, as well as early stage intracortical microcircuitry, following the disruption of 5-HT reuptake during critical developmental periods. The increased projection pattern of the layer IV neurons suggests that the intracortical network changes are not limited to the main entry layer IV but may also affect the subsequent stages of the canonical circuits of the barrel cortex.
  • Mickan, A., McQueen, J. M., Brehm, L., & Lemhöfer, K. (2023). Individual differences in foreign language attrition: A 6-month longitudinal investigation after a study abroad. Language, Cognition and Neuroscience, 38(1), 11-39. doi:10.1080/23273798.2022.2074479.

    Abstract

    While recent laboratory studies suggest that the use of competing languages is a driving force in foreign language (FL) attrition (i.e. forgetting), research on “real” attriters has failed to demonstrate such a relationship. We addressed this issue in a large-scale longitudinal study, following German students throughout a study abroad in Spain and their first six months back in Germany. Monthly, percentage-based frequency of use measures enabled a fine-grained description of language use. L3 Spanish forgetting rates were indeed predicted by the quantity and quality of Spanish use, and correlated negatively with L1 German and positively with L2 English letter fluency. Attrition rates were furthermore influenced by prior Spanish proficiency, but not by motivation to maintain Spanish or non-verbal long-term memory capacity. Overall, this study highlights the importance of language use for FL retention and sheds light on the complex interplay between language use and other determinants of attrition.
  • Miller, M., & Klein, W. (1981). Moral argumentations among children: A case study. Linguistische Berichte, 74, 1-19.
  • Minagawa-Kawai, Y., Cristia, A., Long, B., Vendelin, I., Hakuno, Y., Dutat, M., Filippin, L., Cabrol, D., & Dupoux, E. (2013). Insights on NIRS sensitivity from a cross-linguistic study on the emergence of phonological grammar. Frontiers in Psychology, 4: 170. doi:10.3389/fpsyg.2013.00170.

    Abstract

    Each language has a unique set of phonemic categories and phonotactic rules which determine permissible sound sequences in that language. Behavioral research demonstrates that one’s native language shapes the perception of both sound categories and sound sequences in adults, and neuroimaging results further indicate that the processing of native phonemes and phonotactics involves a left-dominant perisylvian brain network. Recent work using a novel technique, functional Near InfraRed Spectroscopy (NIRS), has suggested that a left-dominant network becomes evident toward the end of the first year of life as infants process phonemic contrasts. The present research project attempted to assess whether the same pattern would be seen for native phonotactics. We measured brain responses in Japanese- and French-learning infants to two contrasts: Abuna vs. Abna (a phonotactic contrast that is native in French, but not in Japanese) and Abuna vs. Abuuna (a vowel length contrast that is native in Japanese, but not in French). Results did not show a significant response to either contrast in either group, unlike both previous behavioral research on phonotactic processing and NIRS work on phonemic processing. To understand these null results, we performed similar NIRS experiments with Japanese adult participants. These data suggest that the infant null results arise from an interaction of multiple factors, involving the suitability of the experimental paradigm for NIRS measurements and stimulus perceptibility. We discuss the challenges facing this novel technique, particularly focusing on the optimal stimulus presentation which could yield strong enough hemodynamic responses when using the change detection paradigm.
  • Mishra, C., Offrede, T., Fuchs, S., Mooshammer, C., & Skantze, G. (2023). Does a robot’s gaze aversion affect human gaze aversion? Frontiers in Robotics and AI, 10: 1127626. doi:10.3389/frobt.2023.1127626.

    Abstract

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot’s gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot’s lack of gaze aversion.
  • Mishra, C., Verdonschot, R. G., Hagoort, P., & Skantze, G. (2023). Real-time emotion generation in human-robot dialogue using large language models. Frontiers in Robotics and AI, 10: 1271610. doi:10.3389/frobt.2023.1271610.

    Abstract

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. A robot expressing congruent model-driven facial emotion expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
  • Mitterer, H., Kim, S., & Cho, T. (2013). Compensation for complete assimilation in speech perception: The case of Korean labial-to-velar assimilation. Journal of Memory and Language, 69, 59-83. doi:10.1016/j.jml.2013.02.001.

    Abstract

    In connected speech, phonological assimilation to neighboring words can lead to pronunciation variants (e.g., ‘garden bench’ → ‘gardem bench’). A large body of literature suggests that listeners use the phonetic context to reconstruct the intended word for assimilation types that often lead to incomplete assimilations (e.g., a pronunciation of “garden” that carries cues for both a labial [m] and an alveolar [n]). In the current paper, we show that a similar context effect is observed for an assimilation that is often complete, Korean labial-to-velar place assimilation. In contrast to the context effects for partial assimilations, however, the context effects seem to rely completely on listeners’ experience with the assimilation pattern in their native language.
  • Mitterer, H., & Russell, K. (2013). How phonological reductions sometimes help the listener. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 977-984. doi:10.1037/a0029196.

    Abstract

    In speech production, high-frequency words are more likely than low-frequency words to be phonologically reduced. We tested in an eye-tracking experiment whether listeners can make use of this correlation between lexical frequency and phonological realization of words. Participants heard prefixed verbs in which the prefix was either fully produced or reduced. Simultaneously, they saw a high-frequency verb and a low-frequency verb with this prefix-plus 2 distractors-on a computer screen. Participants were more likely to look at the high-frequency verb when they heard a reduced prefix than when they heard a fully produced prefix. Listeners hence exploit the correlation of lexical frequency and phonological reduction and assume that a reduced prefix is more likely to belong to a high-frequency word. This shows that reductions do not necessarily burden the listener but may in fact have a communicative function, in line with functional theories of phonology.
  • Mitterer, H., & Reinisch, E. (2013). No delays in application of perceptual learning in speech recognition: Evidence from eye tracking. Journal of Memory and Language, 69(4), 527-545. doi:10.1016/j.jml.2013.07.002.

    Abstract

    Three eye-tracking experiments tested at what processing stage lexically-guided retuning of a fricative contrast affects perception. One group of participants heard an ambiguous fricative between /s/ and /f/ replace /s/ in s-final words, the other group heard the same ambiguous fricative replacing /f/ in f-final words. In a test phase, both groups of participants heard a range of ambiguous fricatives at the end of Dutch minimal pairs (e.g., roos-roof, ‘rose’-‘robbery’). Participants who heard the ambiguous fricative replacing /f/ during exposure chose at test the f-final words more often than the other participants. During this test-phase, eye-tracking data showed that the effect of exposure exerted itself as soon as it could possibly have occurred, 200 ms after the onset of the fricative. This was at the same time as the onset of the effect of the fricative itself, showing that the perception of the fricative is changed by perceptual learning at an early level. Results converged in a time-window analysis and a Jackknife procedure testing the time at which effects reached a given proportion of their maxima. This indicates that perceptual learning affects early stages of speech processing, and supports the conclusion that perceptual learning is indeed perceptual rather than post-perceptual.

  • Mitterer, H., Scharenborg, O., & McQueen, J. M. (2013). Phonological abstraction without phonemes in speech perception. Cognition, 129, 356-361. doi:10.1016/j.cognition.2013.07.011.

    Abstract

    Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.
  • Mitterer, H., & Müsseler, J. M. (2013). Regional accent variation in the shadowing task: Evidence for a loose perception-action coupling in speech. Attention, Perception & Psychophysics, 75, 557-575. doi:10.3758/s13414-012-0407-8.

    Abstract

    We investigated the relation between action and perception in speech processing, using the shadowing task, in which participants repeat words they hear. In support of a tight perception–action link, previous work has shown that phonetic details in the stimulus influence the shadowing response. On the other hand, latencies do not seem to suffer if stimulus and response differ in their articulatory properties. The present investigation tested how perception influences production when participants are confronted with regional variation. Results showed that participants often imitate a regional variant if it occurs in the stimulus set but tend to stick to their own variant if the stimuli are consistent, unless they are forced or induced to correct by the experimental instructions. Articulatory stimulus–response differences do not lead to latency costs. These data indicate that speech perception does not necessarily recruit the production system.
  • Moisik, S. R. (2013). Harsh voice quality and its association with blackness in popular American media. Phonetica, 4, 193-215. doi:10.1159/000351059.

    Abstract

    Performers use various laryngeal settings to create voices for characters and personas they portray. Although some research demonstrates the sociophonetic associations of laryngeal voice quality, few studies have documented or examined the role of harsh voice quality, particularly with vibration of the epilaryngeal structures (growling). This article qualitatively examines phonetic properties of vocal performances in a corpus of popular American media and evaluates the association of voice qualities in these performances with representations of social identity and stereotype. In several cases, contrasting laryngeal states create sociophonetic contrast, and harsh voice quality is paired with the portrayal of racial stereotypes of black people. These cases indicate exaggerated emotional states and are associated with yelling/shouting modes of expression. Overall, however, the functioning of harsh voice quality as it occurs in the data is broader and may involve aggressive posturing, comedic inversion of aggressiveness, vocal pathology, and vocal homage.
  • Monaghan, P., Donnelly, S., Alcock, K., Bidgood, A., Cain, K., Durrant, S., Frost, R. L. A., Jago, L. S., Peter, M. S., Pine, J. M., Turnbull, H., & Rowland, C. F. (2023). Learning to generalise but not segment an artificial language at 17 months predicts children’s language skills 3 years later. Cognitive Psychology, 147: 101607. doi:10.1016/j.cogpsych.2023.101607.

    Abstract

    We investigated whether learning an artificial language at 17 months was predictive of children’s natural language vocabulary and grammar skills at 54 months. Children at 17 months listened to an artificial language containing non-adjacent dependencies, and were then tested on their learning to segment and to generalise the structure of the language. At 54 months, children were then tested on a range of standardised natural language tasks that assessed receptive and expressive vocabulary and grammar. A structural equation model demonstrated that learning the artificial language generalisation at 17 months predicted language abilities – a composite of vocabulary and grammar skills – at 54 months, whereas artificial language segmentation at 17 months did not predict language abilities at this age. Artificial language learning tasks – especially those that probe grammar learning – provide a valuable tool for uncovering the mechanisms driving children’s early language development.

  • Morison, L., Meffert, E., Stampfer, M., Steiner-Wilke, I., Vollmer, B., Schulze, K., Briggs, T., Braden, R., Vogel, A. P., Thompson-Lake, D., Patel, C., Blair, E., Goel, H., Turner, S., Moog, U., Riess, A., Liegeois, F., Koolen, D. A., Amor, D. J., Kleefstra, T., Fisher, S. E., Zweier, C., & Morgan, A. T. (2023). In-depth characterisation of a cohort of individuals with missense and loss-of-function variants disrupting FOXP2. Journal of Medical Genetics, 60(6), 597-607. doi:10.1136/jmg-2022-108734.

    Abstract

    Background
    Heterozygous disruptions of FOXP2 were the first identified molecular cause of a severe speech disorder, childhood apraxia of speech (CAS); yet few cases have been reported, limiting knowledge of the condition.

    Methods
    Here we phenotyped 29 individuals from 18 families with pathogenic FOXP2-only variants (13 loss-of-function, 5 missense variants; 14 males; aged 2 years to 62 years). Health and development (cognitive, motor, social domains) were examined, including speech and language outcomes, with the first cross-linguistic analysis of English and German.

    Results
    Speech disorders were prevalent (24/26, 92%) and CAS was most common (23/26, 89%), with similar speech presentations across English and German. Speech was still impaired in adulthood and some speech sounds (e.g. ‘th’, ‘r’, ‘ch’, ‘j’) were never acquired. Language impairments (22/26, 85%) ranged from mild to severe. Comorbidities included feeding difficulties in infancy (10/27, 37%), fine (14/27, 52%) and gross (14/27, 52%) motor impairment, anxiety (6/28, 21%), depression (7/28, 25%), and sleep disturbance (11/15, 44%). Physical features were common (23/28, 82%) but with no consistent pattern. Cognition ranged from average to mildly impaired, and was incongruent with language ability; for example, seven participants with severe language disorder had average non-verbal cognition.

    Conclusions
    Although we identify increased prevalence of conditions like anxiety, depression and sleep disturbance, we confirm that the consequences of FOXP2 dysfunction remain relatively specific to speech disorder, as compared to other recently identified monogenic conditions associated with CAS. Thus, our findings reinforce that FOXP2 provides a valuable entrypoint for examining the neurobiological bases of speech disorder.
  • Muhinyi, A., & Rowland, C. F. (2023). Contributions of abstract extratextual talk and interactive style to preschoolers’ vocabulary development. Journal of Child Language, 50(1), 198-213. doi:10.1017/S0305000921000696.

    Abstract

    Caregiver abstract talk during shared reading predicts preschool-age children’s vocabulary development. However, previous research has focused on level of abstraction with less consideration of the style of extratextual talk. Here, we investigated the relation between these two dimensions of extratextual talk, and their contributions to variance in children’s vocabulary skills. Caregiver level of abstraction was associated with an interactive reading style. Controlling for socioeconomic status and child age, high interactivity predicted children’s concurrent vocabulary skills whereas abstraction did not. Controlling for earlier vocabulary skills, neither dimension of the extratextual talk predicted later vocabulary. Theoretical and practical relevance are discussed.
  • Mulder, K., Schreuder, R., & Dijkstra, T. (2013). Morphological family size effects in L1 and L2 processing: An electrophysiological study. Language and Cognitive Processes, 27, 1004-1035. doi:10.1080/01690965.2012.733013.

    Abstract

    The present study examined Morphological Family Size effects in first and second language processing. Items with a high or low Dutch (L1) Family Size were contrasted in four experiments involving Dutch–English bilinguals. In two experiments, reaction times (RTs) were collected in English (L2) and Dutch (L1) lexical decision tasks; in two other experiments, an L1 and L2 go/no-go lexical decision task were performed while Event-Related Potentials (ERPs) were recorded. Two questions were addressed. First, is the ERP signal sensitive to the morphological productivity of words? Second, does nontarget language activation in L2 processing spread beyond the item itself, to the morphological family of the activated nontarget word? The two behavioural experiments both showed a facilitatory effect of Dutch Family Size, indicating that the morphological family in the L1 is activated regardless of language context. In the two ERP experiments, Family Size effects were found to modulate the N400 component. Less negative waveforms were observed for words with a high L1 Family Size compared to words with a low L1 Family Size in the N400 time window, in both the L1 and L2 task. In addition, these Family Size effects persisted in later time windows. The data are discussed in light of the Morphological Family Resonance Model (MFRM) of morphological processing and the BIA+ model.
  • Nettle, D., Cronin, K. A., & Bateson, M. (2013). Responses of chimpanzees to cues of conspecific observation. Animal Behaviour, 86(3), 595-602. doi:10.1016/j.anbehav.2013.06.015.

    Abstract

    Recent evidence has shown that humans are remarkably sensitive to artificial cues of conspecific observation when making decisions with potential social consequences. Whether similar effects are found in other great apes has not yet been investigated. We carried out two experiments in which individual chimpanzees, Pan troglodytes, took items of food from an array in the presence of either an image of a large conspecific face or a scrambled control image. In experiment 1 we compared three versions of the face image varying in size and the amount of the face displayed. In experiment 2 we compared a fourth variant of the image with more prominent coloured eyes displayed closer to the focal chimpanzee. The chimpanzees did not look at the face images significantly more than at the control images in either experiment. Although there were trends for some individuals in each experiment to be slower to take high-value food items in the face conditions, these were not consistent or robust. We suggest that the extreme human sensitivity to cues of potential conspecific observation may not be shared with chimpanzees.
  • Newbury, D. F., Mari, F., Akha, E. S., MacDermot, K. D., Canitano, R., Monaco, A. P., Taylor, J. C., Renieri, A., Fisher, S. E., & Knight, S. J. L. (2013). Dual copy number variants involving 16p11 and 6q22 in a case of childhood apraxia of speech and pervasive developmental disorder. European Journal of Human Genetics, 21, 361-365. doi:10.1038/ejhg.2012.166.

    Abstract

    In this issue, Raca et al. present two cases of childhood apraxia of speech (CAS) arising from microdeletions of chromosome 16p11.2. They propose that comprehensive phenotypic profiling may assist in the delineation and classification of such cases. To complement this study, we would like to report on a third, unrelated, child who presents with CAS and a chromosome 16p11.2 heterozygous deletion. We use genetic data from this child and his family to illustrate how comprehensive genetic profiling may also assist in the characterisation of 16p11.2 microdeletion syndrome.
  • Nieuwenhuis, I. L., Folia, V., Forkstam, C., Jensen, O., & Petersson, K. M. (2013). Sleep promotes the extraction of grammatical rules. PLoS One, 8(6): e65046. doi:10.1371/journal.pone.0065046.

    Abstract

    Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.
  • Nieuwland, M. S. (2013). “If a lion could speak …”: Online sensitivity to propositional truth-value of unrealistic counterfactual sentences. Journal of Memory and Language, 68(1), 54-67. doi:10.1016/j.jml.2012.08.003.

    Abstract

    People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual combinations and infer their consequences as the sentence unfolds in time (e.g., “If dogs had gills…”). Experiment 1 established that without this premise, described consequences (e.g., “Dobermans would breathe under water …”) elicited larger N400 responses than real-world true sentences. Incorporation of the counterfactual premise in Experiment 2 generated similar N400 effects of propositional truth-value in counterfactual and real-world sentences, suggesting that the counterfactual context eliminated the interpretive problems posed by locally anomalous sentences. This result did not depend on cloze probability of the sentences. In contrast to earlier findings regarding online comprehension of logical operators and counterfactuals, these results show that ongoing processing can be directly impacted by propositional truth-value, even that of unrealistic counterfactuals.
  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2013). Event-related brain potential evidence for animacy processing asymmetries during sentence comprehension. Brain and Language, 126(2), 151-158. doi:10.1016/j.bandl.2013.04.005.

    Abstract

    The animacy distinction is deeply rooted in the language faculty. A key example is differential object marking, the phenomenon where animate sentential objects receive specific marking. We used event-related potentials to examine the neural processing consequences of case-marking violations on animate and inanimate direct objects in Spanish. Inanimate objects with incorrect prepositional case marker ‘a’ (‘al suelo’) elicited a P600 effect compared to unmarked objects, consistent with previous literature. However, animate objects without the required prepositional case marker (‘el obispo’) only elicited an N400 effect compared to marked objects. This novel finding, an exclusive N400 modulation by a straightforward grammatical rule violation, does not follow from extant neurocognitive models of sentence processing, and mirrors unexpected “semantic P600” effects for thematically problematic sentences. These results may reflect animacy asymmetry in competition for argument prominence: following the article, thematic interpretation difficulties are elicited only by unexpectedly animate objects.
  • Nomi, J. S., Frances, C., Nguyen, M. T., Bastidas, S., & Troup, L. J. (2013). Interaction of threat expressions and eye gaze: an event-related potential study. NeuroReport, 24, 813-817. doi:10.1097/WNR.0b013e3283647682.

    Abstract

    The current study examined the interaction of fearful, angry, happy, and neutral expressions with left, straight, and right eye gaze directions. Human participants viewed faces consisting of various expression and eye gaze combinations while event-related potential (ERP) data were collected. The results showed that angry expressions modulated the mean amplitude of the P1, whereas fearful and happy expressions modulated the mean amplitude of the N170. No influence of eye gaze on mean amplitudes for the P1 and N170 emerged. Fearful, angry, and happy expressions began to interact with eye gaze to influence mean amplitudes in the time window of 200–400 ms. The results suggest that expressions influence early ERPs independently of eye gaze, whereas expression and gaze interact to influence later ERPs.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Specific facial signals associate with categories of social actions conveyed through questions. PLoS One, 18(7): e0288104. doi:10.1371/journal.pone.0288104.

    Abstract

    The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker’s intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request “What time is it?”, an invitation “Will you come to my party?” or a criticism “Are you crazy?”). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions was analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.

    Additional information

    supporting information
  • Nota, N., Trujillo, J. P., Jacobs, V., & Holler, J. (2023). Facilitating question identification through natural intensity eyebrow movements in virtual avatars. Scientific Reports, 13: 21295. doi:10.1038/s41598-023-48586-4.

    Abstract

    In conversation, recognizing social actions (similar to ‘speech acts’) early is important to quickly understand the speaker’s intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.
  • Nota, N., Trujillo, J. P., & Holler, J. (2023). Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars. Cognitive Science, 47(12): e13392. doi:10.1111/cogs.13392.

    Abstract

    Conversation is a time-pressured environment. Recognizing a social action (the ‘‘speech act,’’ such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Additional information

    link to preprint
  • Nozais, V., Forkel, S. J., Petit, L., Talozzi, L., Corbetta, M., Thiebaut de Schotten, M., & Joliot, M. (2023). Atlasing white matter and grey matter joint contributions to resting-state networks in the human brain. Communications Biology, 6: 726. doi:10.1038/s42003-023-05107-3.

    Abstract

    Over the past two decades, the study of resting-state functional magnetic resonance imaging has revealed that functional connectivity within and between networks is linked to cognitive states and pathologies. However, the white matter connections supporting this connectivity remain only partially described. We developed a method to jointly map the white and grey matter contributing to each resting-state network (RSN). Using the Human Connectome Project, we generated an atlas of 30 RSNs. The method also highlighted the overlap between networks, which revealed that most of the brain’s white matter (89%) is shared between multiple RSNs, with 16% shared by at least 7 RSNs. These overlaps, especially the existence of regions shared by numerous networks, suggest that white matter lesions in these areas might strongly impact the communication within networks. We provide an atlas and an open-source software to explore the joint contribution of white and grey matter to RSNs and facilitate the study of the impact of white matter damage to these networks. In a first application of the software with clinical data, we were able to link stroke patients and impacted RSNs, showing that their symptoms aligned well with the estimated functions of the networks.
  • Numssen, O., van der Burght, C. L., & Hartwigsen, G. (2023). Revisiting the focality of non-invasive brain stimulation - implications for studies of human cognition. Neuroscience and Biobehavioral Reviews, 149: 105154. doi:10.1016/j.neubiorev.2023.105154.

    Abstract

    Non-invasive brain stimulation techniques are popular tools to investigate brain function in health and disease. Although transcranial magnetic stimulation (TMS) is widely used in cognitive neuroscience research to probe causal structure-function relationships, studies often yield inconclusive results. To improve the effectiveness of TMS studies, we argue that the cognitive neuroscience community needs to revise the stimulation focality principle – the spatial resolution with which TMS can differentially stimulate cortical regions. In the motor domain, TMS can differentiate between cortical muscle representations of adjacent fingers. However, this high degree of spatial specificity cannot be obtained in all cortical regions due to the influences of cortical folding patterns on the TMS-induced electric field. The region-dependent focality of TMS should be assessed a priori to estimate the experimental feasibility. Post-hoc simulations allow modeling of the relationship between cortical stimulation exposure and behavioral modulation by integrating data across stimulation sites or subjects.

  • Oliveira‑Stahl, G., Farboud, S., Sterling, M. L., Heckman, J. J., Van Raalte, B., Lenferink, D., Van der Stam, A., Smeets, C. J. L. M., Fisher, S. E., & Englitz, B. (2023). High-precision spatial analysis of mouse courtship vocalization behavior reveals sex and strain differences. Scientific Reports, 13: 5219. doi:10.1038/s41598-023-31554-3.

    Abstract

    Mice display a wide repertoire of vocalizations that varies with sex, strain, and context. Especially during social interaction, including sexually motivated dyadic interaction, mice emit sequences of ultrasonic vocalizations (USVs) of high complexity. As animals of both sexes vocalize, a reliable attribution of USVs to their emitter is essential. The state-of-the-art in sound localization for USVs in 2D allows spatial localization at a resolution of multiple centimeters. However, animals interact at closer ranges, e.g. snout-to-snout. Hence, improved algorithms are required to reliably assign USVs. We present a novel algorithm, SLIM (Sound Localization via Intersecting Manifolds), that achieves a 2–3-fold improvement in accuracy (13.1–14.3 mm) using only 4 microphones and extends to many microphones and localization in 3D. This accuracy allows reliable assignment of 84.3% of all USVs in our dataset. We apply SLIM to courtship interactions between adult C57Bl/6J wildtype mice and those carrying a heterozygous Foxp2 variant (R552H). The improved spatial accuracy reveals that vocalization behavior is dependent on the spatial relation between the interacting mice. Female mice vocalized more in close snout-to-snout interaction while male mice vocalized more when the male snout was in close proximity to the female's ano-genital region. Further, we find that the acoustic properties of the ultrasonic vocalizations (duration, Wiener Entropy, and sound level) are dependent on the spatial relation between the interacting mice as well as on the genotype. In conclusion, the improved attribution of vocalizations to their emitters provides a foundation for better understanding social vocal behaviors.

    Additional information

    supplementary movies and figures
  • Otake, T., & Cutler, A. (2013). Lexical selection in action: Evidence from spontaneous punning. Language and Speech, 56(4), 555-573. doi:10.1177/0023830913478933.

    Abstract

    Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures.
  • Özer, D., Karadöller, D. Z., Özyürek, A., & Göksun, T. (2023). Gestures cued by demonstratives in speech guide listeners' visual attention during spatial language comprehension. Journal of Experimental Psychology: General, 152(9), 2623-2635. doi:10.1037/xge0001402.

    Abstract

    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying “The candle is here” and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners’ comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (“here”) compared to when they express redundant information to speech (e.g., “right”) and (b) gazing at gestures related to listeners’ information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners’ comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners’ visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Ozturk, O., Shayan, S., Liszkowski, U., & Majid, A. (2013). Language is not necessary for color categories. Developmental Science, 16, 111-115. doi:10.1111/desc.12008.

    Abstract

    The origin of color categories is under debate. Some researchers argue that color categories are linguistically constructed, while others claim they have a pre-linguistic, and possibly even innate, basis. Although there is some evidence that 4–6-month-old infants respond categorically to color, these empirical results have been challenged in recent years. First, it has been claimed that previous demonstrations of color categories in infants may reflect color preferences instead. Second, and more seriously, other labs have reported failing to replicate the basic findings at all. In the current study we used eye-tracking to test 8-month-old infants’ categorical perception of a previously attested color boundary (green–blue) and an additional color boundary (blue–purple). Our results show that infants are faster and more accurate at fixating targets when they come from a different color category than when from the same category (even though the chromatic separation sizes were equated). This is the case for both blue–green and blue–purple. Our findings provide independent evidence for the existence of color categories in pre-linguistic infants, and suggest that categorical perception of color can occur without color language.
  • Parlatini, V., Itahashi, T., Lee, Y., Liu, S., Nguyen, T. T., Aoki, Y. Y., Forkel, S. J., Catani, M., Rubia, K., Zhou, J. H., Murphy, D. G., & Cortese, S. (2023). White matter alterations in Attention-Deficit/Hyperactivity Disorder (ADHD): a systematic review of 129 diffusion imaging studies with meta-analysis. Molecular Psychiatry, 28, 4098-4123. doi:10.1038/s41380-023-02173-1.

    Abstract

    Aberrant anatomical brain connections in attention-deficit/hyperactivity disorder (ADHD) are reported inconsistently across diffusion weighted imaging (DWI) studies. Based on a pre-registered protocol (Prospero: CRD42021259192), we searched PubMed, Ovid, and Web of Knowledge until 26/03/2022 to conduct a systematic review of DWI studies. We performed a quality assessment based on imaging acquisition, preprocessing, and analysis. Using signed differential mapping, we meta-analyzed a subset of the retrieved studies amenable to quantitative evidence synthesis, i.e., tract-based spatial statistics (TBSS) studies, in individuals of any age and, separately, in children, adults, and high-quality datasets. Finally, we conducted meta-regressions to test the effect of age, sex, and medication-naïvety. We included 129 studies (6739 ADHD participants and 6476 controls), of which 25 TBSS studies provided peak coordinates for case-control differences in fractional anisotropy (FA) (32 datasets) and 18 in mean diffusivity (MD) (23 datasets). The systematic review highlighted white matter alterations (especially reduced FA) in projection, commissural and association pathways of individuals with ADHD, which were associated with symptom severity and cognitive deficits. The meta-analysis showed a consistent reduced FA in the splenium and body of the corpus callosum, extending to the cingulum. Lower FA was related to older age, and case-control differences did not survive in the pediatric meta-analysis. About 68% of studies were of low quality, mainly due to acquisitions with non-isotropic voxels or lack of motion correction; and the sensitivity analysis in high-quality datasets yielded no significant results. Findings suggest prominent alterations in posterior interhemispheric connections subserving cognitive and motor functions affected in ADHD, although these might be influenced by non-optimal acquisition parameters/preprocessing. Absence of findings in children may be related to the late development of callosal fibers, which may enhance case-control differences in adulthood. Clinicodemographic and methodological differences were major barriers to consistency and comparability among studies, and should be addressed in future investigations.
  • Passmore, S., Barth, W., Greenhill, S. J., Quinn, K., Sheard, C., Argyriou, P., Birchall, J., Bowern, C., Calladine, J., Deb, A., Diederen, A., Metsäranta, N. P., Araujo, L. H., Schembri, R., Hickey-Hall, J., Honkola, T., Mitchell, A., Poole, L., Rácz, P. M., Roberts, S. G., Ross, R. M., Thomas-Colquhoun, E., Evans, N., & Jordan, F. M. (2023). Kinbank: A global database of kinship terminology. PLOS ONE, 18: e0283218. doi:10.1371/journal.pone.0283218.

    Abstract

    For a single species, human kinship organization is both remarkably diverse and strikingly organized. Kinship terminology is the structured vocabulary used to classify, refer to, and address relatives and family. Diversity in kinship terminology has been analyzed by anthropologists for over 150 years, although recurrent patterning across cultures remains incompletely explained. Despite the wealth of kinship data in the anthropological record, comparative studies of kinship terminology are hindered by data accessibility. Here we present Kinbank, a new database of 210,903 kinterms from a global sample of 1,229 spoken languages. Using open-access and transparent data provenance, Kinbank offers an extensible resource for kinship terminology, enabling researchers to explore the rich diversity of human family organization and to test longstanding hypotheses about the origins and drivers of recurrent patterns. We illustrate our contribution with two examples. We demonstrate strong gender bias in the phonological structure of parent terms across 1,022 languages, and we show that there is no evidence for a coevolutionary relationship between cross-cousin marriage and bifurcate-merging terminology in Bantu languages. Analysing kinship data is notoriously challenging; Kinbank aims to eliminate data accessibility issues from that challenge and provide a platform to build an interdisciplinary understanding of kinship.

  • Paulat, N. S., Storer, J. M., Moreno-Santillán, D. D., Osmanski, A. B., Sullivan, K. A. M., Grimshaw, J. R., Korstian, J., Halsey, M., Garcia, C. J., Crookshanks, C., Roberts, J., Smit, A. F. A., Hubley, R., Rosen, J., Teeling, E. C., Vernes, S. C., Myers, E., Pippel, M., Brown, T., Hiller, M., Zoonomia Consortium, Rojas, D., Dávalos, L. M., Lindblad-Toh, K., Karlsson, E. K., & Ray, D. A. (2023). Chiropterans are a hotspot for horizontal transfer of DNA transposons in Mammalia. Molecular Biology and Evolution, 40(5): msad092. doi:10.1093/molbev/msad092.

    Abstract

    Horizontal transfer of transposable elements (TEs) is an important mechanism contributing to genetic diversity and innovation. Bats (order Chiroptera) have repeatedly been shown to experience horizontal transfer of TEs at what appears to be a high rate compared with other mammals. We investigated the occurrence of horizontally transferred (HT) DNA transposons involving bats. We found over 200 putative HT elements within bats; 16 transposons were shared across distantly related mammalian clades, and 2 other elements were shared with a fish and two lizard species. Our results indicate that bats are a hotspot for horizontal transfer of DNA transposons. These events broadly coincide with the diversification of several bat clades, supporting the hypothesis that DNA transposon invasions have contributed to genetic diversification of bats.

  • Peeters, D., Dijkstra, T., & Grainger, J. (2013). The representation and processing of identical cognates by late bilinguals: RT and ERP effects. Journal of Memory and Language, 68, 315-332. doi:10.1016/j.jml.2012.12.003.

    Abstract

    Across the languages of a bilingual, translation equivalents can have the same orthographic form and shared meaning (e.g., TABLE in French and English). How such words, called orthographically identical cognates, are processed and represented in the bilingual brain is not well understood. In the present study, late French–English bilinguals processed such identical cognates and control words in an English lexical decision task. Both behavioral and electrophysiological data were collected. Reaction times to identical cognates were shorter than for non-cognate controls and depended on both English and French frequency. Cognates with a low English frequency showed a larger cognate advantage than those with a high English frequency. In addition, N400 amplitude was found to be sensitive to cognate status and both the English and French frequency of the cognate words. Theoretical consequences for the processing and representation of identical cognates are discussed.
  • Pender, R., Fearon, P., St Pourcain, B., Heron, J., & Mandy, W. (2023). Developmental trajectories of autistic social traits in the general population. Psychological Medicine, 53(3), 814-822. doi:10.1017/S0033291721002166.

    Abstract

    Background

    Autistic people show diverse trajectories of autistic traits over time, a phenomenon labelled ‘chronogeneity’. For example, some show a decrease in symptoms, whilst others experience an intensification of difficulties. Autism spectrum disorder (ASD) is a dimensional condition, representing one end of a trait continuum that extends throughout the population. To date, no studies have investigated chronogeneity across the full range of autistic traits. We investigated the nature and clinical significance of autism trait chronogeneity in a large, general population sample.
    Methods

    Autistic social/communication traits (ASTs) were measured in the Avon Longitudinal Study of Parents and Children using the Social and Communication Disorders Checklist (SCDC) at ages 7, 10, 13 and 16 (N = 9744). We used Growth Mixture Modelling (GMM) to identify groups defined by their AST trajectories. Measures of ASD diagnosis, sex, IQ and mental health (internalising and externalising) were used to investigate external validity of the derived trajectory groups.
    Results

    The selected GMM model identified four AST trajectory groups: (i) Persistent High (2.3% of sample), (ii) Persistent Low (83.5%), (iii) Increasing (7.3%) and (iv) Decreasing (6.9%) trajectories. The Increasing group, in which females were a slight majority (53.2%), showed dramatic increases in SCDC scores during adolescence, accompanied by escalating internalising and externalising difficulties. Two-thirds (63.6%) of the Decreasing group were male.
    Conclusions

    Clinicians should note that for some young people autism-trait-like social difficulties first emerge during adolescence accompanied by problems with mood, anxiety, conduct and attention. A converse, majority-male group shows decreasing social difficulties during adolescence.
  • Perlman, M., & Gibbs, R. W. (2013). Pantomimic gestures reveal the sensorimotor imagery of a human-fostered gorilla. Journal of Mental Imagery, 37(3/4), 73-96.

    Abstract

    This article describes the use of pantomimic gestures by the human-fostered gorilla, Koko, as evidence of her sensorimotor imagery. We present five video-recorded instances of Koko's spontaneously created pantomimes during her interactions with human caregivers. The precise movements and context of each gesture are described in detail to examine how it functions to communicate Koko's requests for various objects and actions to be performed. The analysis assesses the active "iconicity" of each targeted gesture and examines the underlying elements of sensorimotor imagery that are incorporated by the gesture. We suggest that Koko's pantomimes reflect an imaginative understanding of different actions, objects, and events that is similar in important respects to humans' embodied imagery capabilities.
  • Petzell, M., & Hammarström, H. (2013). Grammatical and lexical subclassification of the Morogoro region, Tanzania. Nordic journal of African Studies, 22(3), 129-157.

    Abstract

    This article discusses lexical and grammatical comparison and sub-grouping in a set of closely related Bantu language varieties in the Morogoro region, Tanzania. The Greater Ruvu Bantu language varieties include Kagulu [G12], Zigua [G31], Kwere [G32], Zalamo [G33], Nguu [G34], Luguru [G35], Kami [G36] and Kutu [G37]. The comparison is based on 27 morphophonological and morphosyntactic parameters, supplemented by a lexicon of 500 items. In order to determine the relationships and boundaries between the varieties, grammatical phenomena constitute a valuable complement to counting the number of identical words or cognates. We have used automated cognate judgment methods, as well as manual cognate judgments based on older sources, in order to compare lexical data. Finally, we have included speaker attitudes (i.e. self-assessment of linguistic similarity) in an attempt to map whether the languages that are perceived by speakers as being linguistically similar really are closely related.
  • Piai, V., & Eikelboom, D. (2023). Brain areas critical for picture naming: A systematic review and meta-analysis of lesion-symptom mapping studies. Neurobiology of Language, 4(2), 280-296. doi:10.1162/nol_a_00097.

    Abstract

    Lesion-symptom mapping (LSM) studies have revealed brain areas critical for naming, typically finding significant associations between damage to left temporal, inferior parietal, and inferior frontal regions and impoverished naming performance. However, specific subregions found in the available literature vary. Hence, the aim of this study was to perform a systematic review and meta-analysis of published lesion-based findings, obtained from studies with unique cohorts investigating brain areas critical for accuracy in naming in stroke patients at least 1 month post-onset. An anatomic likelihood estimation (ALE) meta-analysis of these LSM studies was performed. Ten papers entered the ALE meta-analysis, with similar lesion coverage over left temporal and left inferior frontal areas. This small number is a major limitation of the present study. Clusters were found in left anterior temporal lobe, posterior temporal lobe extending into inferior parietal areas, in line with the arcuate fasciculus, and in pre- and postcentral gyri and middle frontal gyrus. No clusters were found in left inferior frontal gyrus. These results were further substantiated by examining five naming studies that investigated performance beyond global accuracy, corroborating the ALE meta-analysis results. The present review and meta-analysis highlight the involvement of left temporal and inferior parietal cortices in naming, and of mid to posterior portions of the temporal lobe in particular in conceptual-lexical retrieval for speaking.

  • Piai, V., Roelofs, A., Acheson, D. J., & Takashima, A. (2013). Attention for speaking: Neural substrates of general and specific mechanisms for monitoring and control. Frontiers in Human Neuroscience, 7: 832. doi:10.3389/fnhum.2013.00832.

    Abstract

    Accumulating evidence suggests that some degree of attentional control is required to regulate and monitor processes underlying speaking. Although progress has been made in delineating the neural substrates of the core language processes involved in speaking, substrates associated with regulatory and monitoring processes have remained relatively underspecified. We report the results of an fMRI study examining the neural substrates related to performance in three attention-demanding tasks varying in the amount of linguistic processing: vocal picture naming while ignoring distractors (picture-word interference, PWI); vocal color naming while ignoring distractors (Stroop); and manual object discrimination while ignoring spatial position (Simon task). All three tasks had congruent and incongruent stimuli, while PWI and Stroop also had neutral stimuli. Analyses focusing on common activation across tasks identified a portion of the dorsal anterior cingulate cortex (ACC) that was active in incongruent trials for all three tasks, suggesting that this region subserves a domain-general attentional control function. In the language tasks, this area showed increased activity for incongruent relative to congruent stimuli, consistent with the involvement of domain-general mechanisms of attentional control in word production. The two language tasks also showed activity in anterior-superior temporal gyrus (STG). Activity increased for neutral PWI stimuli (picture and word did not share the same semantic category) relative to incongruent (categorically related) and congruent stimuli. This finding is consistent with the involvement of language-specific areas in word production, possibly related to retrieval of lexical-semantic information from memory. The current results thus suggest that in addition to engaging language-specific areas for core linguistic processes, speaking also engages the ACC, a region that is likely implementing domain-general attentional control.
  • Piai, V., Meyer, L., Schreuder, R., & Bastiaansen, M. C. M. (2013). Sit down and read on: Working memory and long-term memory in particle-verb processing. Brain and Language, 127(2), 296-306. doi:10.1016/j.bandl.2013.09.015.

    Abstract

    Particle verbs (e.g., look up) are lexical items for which particle and verb share a single lexical entry. Using event-related brain potentials, we examined working memory and long-term memory involvement in particle-verb processing. Dutch participants read sentences with head verbs that allow zero, two, or more than five particles to occur downstream. Additionally, sentences were presented for which the encountered particle was semantically plausible, semantically implausible, or forming a non-existing particle verb. An anterior negativity was observed at the verbs that potentially allow for a particle downstream relative to verbs that do not, possibly indexing storage of the verb until the dependency with its particle can be closed. Moreover, a graded N400 was found at the particle (smallest amplitude for plausible particles and largest for particles forming non-existing particle verbs), suggesting that lexical access to a shared lexical entry occurred at two separate time points.
  • Piai, V., & Roelofs, A. (2013). Working memory capacity and dual-task interference in picture naming. Acta Psychologica, 142, 332-342. doi:10.1016/j.actpsy.2013.01.006.
  • Pijls, F., & Kempen, G. (1986). Een psycholinguïstisch model voor grammatische samentrekking. De Nieuwe Taalgids, 79, 217-234.
  • St Pourcain, B., Whitehouse, A. J. O., Ang, W. Q., Warrington, N. M., Glessner, J. T., Wang, K., Timpson, N. J., Evans, D. M., Kemp, J. P., Ring, S. M., McArdle, W. L., Golding, J., Hakonarson, H., Pennell, C. E., & Smith, G. (2013). Common variation contributes to the genetic architecture of social communication traits. Molecular Autism, 4: 34. doi:10.1186/2040-2392-4-34.

    Abstract

    Background

    Social communication difficulties represent an autistic trait that is highly heritable and persistent during the course of development. However, little is known about the underlying genetic architecture of this phenotype.
    Methods

    We performed a genome-wide association study on parent-reported social communication problems using items of the Children's Communication Checklist (age 10 to 11 years), studying single and/or joint marker effects. Analyses were conducted in a large UK population-based birth cohort (Avon Longitudinal Study of Parents and their Children, ALSPAC, N = 5,584) and followed up within a sample of children with comparable measures from Western Australia (RAINE, N = 1,364).
    Results

    Two of our seven independent top signals (P-discovery < 1.0E-05) were replicated (0.009 < P-replication ≤ 0.02) within RAINE and suggested evidence for association at 6p22.1 (rs9257616, meta-P = 2.5E-07) and 14q22.1 (rs2352908, meta-P = 1.1E-06). The signal at 6p22.1 was identified within the olfactory receptor gene cluster within the broader major histocompatibility complex (MHC) region. The strongest candidate locus within this genomic area was TRIM27. This gene encodes a ubiquitin E3 ligase, which is an interaction partner of methyl-CpG-binding domain (MBD) proteins, such as MBD3 and MBD4, and rare protein-coding mutations within MBD3 and MBD4 have been linked to autism. The signal at 14q22.1 was found within a gene-poor region. Single-variant findings were complemented by estimations of the narrow-sense heritability in ALSPAC, suggesting that approximately a fifth of the phenotypic variance in social communication traits is accounted for by joint additive effects of genotyped single nucleotide polymorphisms throughout the genome (h2 (SE) = 0.18 (0.066), P = 0.0027).
    Conclusion

    Overall, our study provides both joint and single-SNP-based evidence for the contribution of common polymorphisms to variation in social communication phenotypes.
  • Praamstra, P., Meyer, A. S., & Levelt, W. J. M. (1994). Neurophysiological manifestations of auditory phonological processing: Latency variation of a negative ERP component timelocked to phonological mismatch. Journal of Cognitive Neuroscience, 6(3), 204-219. doi:10.1162/jocn.1994.6.3.204.

    Abstract

    Two experiments examined phonological priming effects on reaction times, error rates, and event-related brain potential (ERP) measures in an auditory lexical decision task. In Experiment 1 related prime-target pairs rhymed, and in Experiment 2 they alliterated (i.e., shared the consonantal onset and vowel). Event-related potentials were recorded in a delayed response task. Reaction times and error rates were obtained both for the delayed and an immediate response task. The behavioral data of Experiment 1 provided evidence for phonological facilitation of word, but not of nonword decisions. The brain potentials were more negative to unrelated than to rhyming word-word pairs between 450 and 700 msec after target onset. This negative enhancement was not present for word-nonword pairs. Thus, the ERP results match the behavioral data. The behavioral data of Experiment 2 provided no evidence for phonological facilitation. However, between 250 and 450 msec after target onset, i.e., considerably earlier than in Experiment 1, brain potentials were more negative for unrelated than for alliterating word-word and word-nonword pairs. It is argued that the ERP effects in the two experiments could be modulations of the same underlying component, possibly the N400. The difference in the timing of the effects is likely to be due to the fact that the shared segments in related stimulus pairs appeared in different word positions in the two experiments.
  • Quaresima, A., Fitz, H., Duarte, R., Van den Broek, D., Hagoort, P., & Petersson, K. M. (2023). The Tripod neuron: A minimal structural reduction of the dendritic tree. The Journal of Physiology, 601(15), 3007-3437. doi:10.1113/JP283399.

    Abstract

    Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modelled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection and pathway-specific gating through shunting inhibition. Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that it solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod neuron can be used efficiently in simulations of larger cortical circuits.
  • Raghavan, R., Raviv, L., & Peeters, D. (2023). What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition, 240: 105581. doi:10.1016/j.cognition.2023.105581.

    Abstract

    Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
  • Raimondi, T., Di Panfilo, G., Pasquali, M., Zarantonello, M., Favaro, L., Savini, T., Gamba, M., & Ravignani, A. (2023). Isochrony and rhythmic interaction in ape duetting. Proceedings of the Royal Society B: Biological Sciences, 290: 20222244. doi:10.1098/rspb.2022.2244.

    Abstract

    How did rhythm originate in humans, and other species? One cross-cultural universal, frequently found in human music, is isochrony: when note onsets repeat regularly like the ticking of a clock. Another universal consists in synchrony (e.g. when individuals coordinate their notes so that they are sung at the same time). An approach to biomusicology focuses on similarities and differences across species, trying to build phylogenies of musical traits. Here we test for the presence of, and a link between, isochrony and synchrony in a non-human animal. We focus on the songs of one of the few singing primates, the lar gibbon (Hylobates lar), extracting temporal features from their solo songs and duets. We show that another ape exhibits one rhythmic feature at the core of human musicality: isochrony. We show that an enhanced call rate overall boosts isochrony, suggesting that respiratory physiological constraints play a role in determining the song's rhythmic structure. However, call rate alone cannot explain the flexible isochrony we witness. Isochrony is plastic and modulated depending on the context of emission: gibbons are more isochronous when duetting than singing solo. We present evidence for rhythmic interaction: we find statistical causality between one individual's note onsets and the co-singer's onsets, and a higher-than-chance degree of synchrony in the duets. Finally, we find a sex-specific trade-off between individual isochrony and synchrony. Gibbons' plasticity for isochrony and rhythmic overlap may suggest a potential shared selective pressure for interactive vocal displays in singing primates. This pressure may have convergently shaped human and gibbon musicality while acting on a common neural primate substrate. Beyond humans, singing primates are promising models to understand how music and, specifically, a sense of rhythm originated in the primate phylogeny.
  • Rasenberg, M., Amha, A., Coler, M., van Koppen, M., van Miltenburg, E., de Rijk, L., Stommel, W., & Dingemanse, M. (2023). Reimagining language: Towards a better understanding of language by including our interactions with non-humans. Linguistics in the Netherlands, 40, 309-317. doi:10.1075/avt.00095.ras.

    Abstract

    What is language and who or what can be said to have it? In this essay we consider this question in the context of interactions with non-humans, specifically: animals and computers. While perhaps an odd pairing at first glance, here we argue that these domains can offer contrasting perspectives through which we can explore and reimagine language. The interactions between humans and animals, as well as between humans and computers, reveal both the essence and the boundaries of language: from examining the role of sequence and contingency in human-animal interaction, to unravelling the challenges of natural interactions with “smart” speakers and language models. By bringing together disparate fields around foundational questions, we push the boundaries of linguistic inquiry and uncover new insights into what language is and how it functions in diverse non-human-exclusive contexts.
  • Rasing, N. B., Van de Geest-Buit, W., Chan, O. Y. A., Mul, K., Lanser, A., Erasmus, C. E., Groothuis, J. T., Holler, J., Ingels, K. J. A. O., Post, B., Siemann, I., & Voermans, N. C. (2023). Psychosocial functioning in patients with altered facial expression: A scoping review in five neurological diseases. Disability and Rehabilitation. Advance online publication. doi:10.1080/09638288.2023.2259310.

    Abstract

    Purpose

    To perform a scoping review to investigate the psychosocial impact of having an altered facial expression in five neurological diseases.
    Methods

    A systematic literature search was performed. Studies were on Bell’s palsy, facioscapulohumeral muscular dystrophy (FSHD), Moebius syndrome, myotonic dystrophy type 1, or Parkinson’s disease patients; had a focus on altered facial expression; and had any form of psychosocial outcome measure. Data extraction focused on psychosocial outcomes.
    Results

    Bell’s palsy, myotonic dystrophy type 1, and Parkinson’s disease patients more often experienced some degree of psychosocial distress than healthy controls. In FSHD, facial weakness negatively influenced communication and was experienced as a burden. The psychosocial distress applied especially to women (Bell’s palsy and Parkinson’s disease) and to patients with more severely altered facial expression (Bell’s palsy), but not to Moebius syndrome patients. Furthermore, Parkinson’s disease patients with more pronounced hypomimia were perceived more negatively by observers. Various strategies were reported to compensate for altered facial expression.
    Conclusions

    This review showed that patients with altered facial expression in four of five included neurological diseases had reduced psychosocial functioning. Future research recommendations include studies on observers’ judgements of patients during social interactions and on the effectiveness of compensation strategies in enhancing psychosocial functioning.
    Implications for rehabilitation

    Negative effects of altered facial expression on psychosocial functioning are common and more abundant in women and in more severely affected patients with various neurological disorders.

    Health care professionals should be alert to psychosocial distress in patients with altered facial expression.

    Learning of compensatory strategies could be a beneficial therapy for patients with psychosocial distress due to an altered facial expression.
  • Ravignani, A., & Herbst, C. T. (2023). Voices in the ocean: Toothed whales evolved a third way of making sounds similar to that of land mammals and birds. Science, 379(6635), 881-882. doi:10.1126/science.adg5256.
  • Ravignani, A., Sonnweber, R.-S., Stobbe, N., & Fitch, W. T. (2013). Action at a distance: Dependency sensitivity in a New World primate. Biology Letters, 9(6): 20130852. doi:10.1098/rsbl.2013.0852.

    Abstract

    Sensitivity to dependencies (correspondences between distant items) in sensory stimuli plays a crucial role in human music and language. Here, we show that squirrel monkeys (Saimiri sciureus) can detect abstract, non-adjacent dependencies in auditory stimuli. Monkeys discriminated between tone sequences containing a dependency and those lacking it, and generalized to previously unheard pitch classes and novel dependency distances. This constitutes the first pattern learning study where artificial stimuli were designed with the species' communication system in mind. These results suggest that the ability to recognize dependencies represents a capability that had already evolved in humans’ last common ancestor with squirrel monkeys, and perhaps before.
  • Ravignani, A., Olivera, M. V., Gingras, B., Hofer, R., Hernandez, R. C., Sonnweber, R. S., & Fitch, T. W. (2013). Primate drum kit: A system for studying acoustic pattern production by non-human primates using acceleration and strain sensors. Sensors, 13(8), 9790-9820. doi:10.3390/s130809790.

    Abstract

    The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step to enable the testing of a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available, with the use of sensors in non-human animals being almost exclusively devoted to applications in the food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematic recording of data sensed from these movements, (iii) the possibility of altering the acoustic feedback properties of the object using remote control. We present two prototypes we developed for application with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow sounds to be arbitrarily associated with physical object movements. The prototypes differ in sensing technology, costs, intended use and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data is sent to a computer running Python through an Arduino board. A second prototype consists of a modified Wii Remote contained in a gum toy. Acceleration data is sent via Bluetooth to a computer running Max/MSP. We successfully pilot-tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments.
  • Raviv, L., Jacobson, S. L., Plotnik, J. M., Bowman, J., Lynch, V., & Benítez-Burraco, A. (2023). Elephants as an animal model for self-domestication. Proceedings of the National Academy of Sciences of the United States of America, 120(15): e2208607120. doi:10.1073/pnas.2208607120.

    Abstract

    Humans are unique in their sophisticated culture and societal structures, their complex languages, and their extensive tool use. According to the human self-domestication hypothesis, this unique set of traits may be the result of an evolutionary process of self-induced domestication, in which humans evolved to be less aggressive and more cooperative. However, the only other species that has been argued to be self-domesticated besides humans so far is bonobos, resulting in a narrow scope for investigating this theory limited to the primate order. Here, we propose an animal model for studying self-domestication: the elephant. First, we support our hypothesis with an extensive cross-species comparison, which suggests that elephants indeed exhibit many of the features associated with self-domestication (e.g., reduced aggression, increased prosociality, extended juvenile period, increased playfulness, socially regulated cortisol levels, and complex vocal behavior). Next, we present genetic evidence to reinforce our proposal, showing that genes positively selected in elephants are enriched in pathways associated with domestication traits and include several candidate genes previously associated with domestication. We also discuss several explanations for what may have triggered a self-domestication process in the elephant lineage. Our findings support the idea that elephants, like humans and bonobos, may be self-domesticated. Since the most recent common ancestor of humans and elephants is likely the most recent common ancestor of all placental mammals, our findings have important implications for convergent evolution beyond the primate taxa, and constitute an important advance toward understanding how and why self-domestication shaped humans’ unique cultural niche.

  • Reesink, G. (2013). Expressing the GIVE event in Papuan languages: A preliminary survey. Linguistic Typology, 17(2), 217-266. doi:10.1515/lity-2013-0010.

    Abstract

    The linguistic expression of the GIVE event is investigated in a sample of 72 Papuan languages, 33 belonging to the Trans New Guinea family, 39 of various non-TNG lineages. Irrespective of the verbal template (prefix, suffix, or no indexation of undergoer), in the majority of languages the recipient is marked as the direct object of a monotransitive verb, which sometimes involves stem suppletion for the recipient. While a few languages allow verbal affixation for all three arguments, a number of languages challenge the universal claim that the 'give' verb always has three arguments.
  • Regier, T., Khetarpal, N., & Majid, A. (2013). Inferring semantic maps. Linguistic Typology, 17, 89-105. doi:10.1515/lity-2013-0003.

    Abstract

    Semantic maps are a means of representing universal structure underlying crosslanguage semantic variation. However, no algorithm has existed for inferring a graph-based semantic map from data. Here, we note that this open problem is formally identical to the known problem of inferring a social network from disease outbreaks. From this identity it follows that semantic map inference is computationally intractable, but that an efficient approximation algorithm for it exists. We demonstrate that this algorithm produces sensible semantic maps from two existing bodies of data. We conclude that universal semantic graph structure can be automatically approximated from cross-language semantic data.
  • Reinisch, E., Weber, A., & Mitterer, H. (2013). Listeners retune phoneme categories across languages. Journal of Experimental Psychology: Human Perception and Performance, 39, 75-86. doi:10.1037/a0027979.

    Abstract

    Native listeners adapt to noncanonically produced speech by retuning phoneme boundaries by means of lexical knowledge. We asked whether a second language lexicon can also guide category retuning and whether perceptual learning transfers from a second language (L2) to the native language (L1). During a Dutch lexical-decision task, German and Dutch listeners were exposed to unusual pronunciation variants in which word-final /f/ or /s/ was replaced by an ambiguous sound. At test, listeners categorized Dutch minimal word pairs ending in sounds along an /f/–/s/ continuum. Dutch L1 and German L2 listeners showed boundary shifts of a similar magnitude. Moreover, following exposure to Dutch-accented English, Dutch listeners also showed comparable effects of category retuning when they heard the same speaker speak her native language (Dutch) during the test. The former result suggests that lexical representations in a second language are specific enough to support lexically guided retuning, and the latter implies that production patterns in a second language are deemed a stable speaker characteristic likely to transfer to the native language; thus retuning of phoneme categories applies across languages.
  • Reinisch, E., & Sjerps, M. J. (2013). The uptake of spectral and temporal cues in vowel perception is rapidly influenced by context. Journal of Phonetics, 41, 101-116. doi:10.1016/j.wocn.2013.01.002.

    Abstract

    Speech perception is dependent on auditory information within phonemes such as spectral or temporal cues. The perception of those cues, however, is affected by auditory information in surrounding context (e.g., a fast context sentence can make a target vowel sound subjectively longer). In a two-by-two design the current experiments investigated when these different factors influence vowel perception. Dutch listeners categorized minimal word pairs such as /tɑk/–/taːk/ (“branch”–“task”) embedded in a context sentence. Critically, the Dutch /ɑ/–/aː/ contrast is cued by spectral and temporal information. We varied the second formant (F2) frequencies and durations of the target vowels. Independently, we also varied the F2 and duration of all segments in the context sentence. The timecourse of cue uptake on the targets was measured in a printed-word eye-tracking paradigm. Results show that the uptake of spectral cues slightly precedes the uptake of temporal cues. Furthermore, acoustic manipulations of the context sentences influenced the uptake of cues in the target vowel immediately. That is, listeners did not need additional time to integrate spectral or temporal cues of a target sound with auditory information in the context. These findings argue for an early locus of contextual influences in speech perception.
  • Reinisch, E., Jesse, A., & Nygaard, L. C. (2013). Tone of voice guides word learning in informative referential contexts. Quarterly Journal of Experimental Psychology, 66, 1227-1240. doi:10.1080/17470218.2012.736525.

    Abstract

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalise them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
  • Riedel, M., Wittenburg, P., Reetz, J., van de Sanden, M., Rybicki, J., von Vieth, B. S., Fiameni, G., Mariani, G., Michelini, A., Cacciari, C., Elbers, W., Broeder, D., Verkerk, R., Erastova, E., Lautenschlaeger, M., Budich, R. G., Thielmann, H., Coveney, P., Zasada, S., Haidar, A., Buechner, O., Manzano, C., Memon, S., Memon, S., Helin, H., Suhonen, J., Lecarpentier, D., Koski, K., & Lippert, T. (2013). A data infrastructure reference model with applications: Towards realization of a ScienceTube vision with a data replication service. Journal of Internet Services and Applications, 4, 1-17. doi:10.1186/1869-0238-4-1.

    Abstract

    Scientific user communities have worked with data for many years and thus already operate a wide variety of data infrastructures in production today. The aim of this paper is therefore not to create one new general data architecture that would fail to be adopted by each individual user community. Instead, this contribution aims to design a reference model with abstract entities that is able to federate existing concrete infrastructures under one umbrella. A reference model is an abstract framework for understanding significant entities and the relationships between them, and thus helps in understanding existing data infrastructures when comparing them in terms of functionality, services, and boundary conditions. An architecture derived from such a reference model can then be used to create a federated architecture that builds on the existing infrastructures and aligns them to a major common vision. This common vision is named ‘ScienceTube’ as part of this contribution, and it determines the high-level goal that the reference model aims to support. This paper describes how a well-focused use case around data replication and its related activities in the EUDAT project provides a first step towards this vision. Concrete stakeholder requirements arising from scientific end users, such as those of the European Strategy Forum on Research Infrastructures (ESFRI) projects, underpin this contribution with clear evidence that the EUDAT activities are bottom-up, thus providing real solutions towards the often only abstractly described ‘high-level big data challenges’. The federated approach, taking advantage of community and data centers (with large computational resources), further shows how data replication services enable data-intensive computing on terabytes or even petabytes of data emerging from ESFRI projects.
  • Rietveld, C. A., Medland, S. E., Derringer, J., Yang, J., Esko, T., Martin, N. W., Westra, H.-J., Shakhbazov, K., Abdellaoui, A., Agrawal, A., Albrecht, E., Alizadeh, B. Z., Amin, N., Barnard, J., Baumeister, S. E., Benke, K. S., Bielak, L. F., Boatman, J. A., Boyle, P. A., Davies, G., de Leeuw, C., Eklund, N., Evans, D. S., Ferhmann, R., Fischer, K., Gieger, C., Gjessing, H. K., Hägg, S., Harris, J. R., Hayward, C., Holzapfel, C., Ibrahim-Verbaas, C. A., Ingelsson, E., Jacobsson, B., Joshi, P. K., Jugessur, A., Kaakinen, M., Kanoni, S., Karjalainen, J., Kolcic, I., Kristiansson, K., Kutalik, Z., Lahti, J., Lee, S. H., Lin, P., Lind, P. A., Liu, Y., Lohman, K., Loitfelder, M., McMahon, G., Vidal, P. M., Meirelles, O., Milani, L., Myhre, R., Nuotio, M.-L., Oldmeadow, C. J., Petrovic, K. E., Peyrot, W. J., Polasek, O., Quaye, L., Reinmaa, E., Rice, J. P., Rizzi, T. S., Schmidt, H., Schmidt, R., Smith, A. V., Smith, J. A., Tanaka, T., Terracciano, A., van der Loos, M. J. H. M., Vitart, V., Völzke, H., Wellmann, J., Yu, L., Zhao, W., Allik, J., Attia, J. R., Bandinelli, S., Bastardot, F., Beauchamp, J., Bennett, D. A., Berger, K., Bierut, L. J., Boomsma, D. I., Bültmann, U., Campbell, H., Chabris, C. F., Cherkas, L., Chung, M. K., Cucca, F., de Andrade, M., De Jager, P. L., De Neve, J.-E., Deary, I. J., Dedoussis, G. V., Deloukas, P., Dimitriou, M., Eiríksdóttir, G., Elderson, M. F., Eriksson, J. G., Evans, D. M., Faul, J. D., Ferrucci, L., Garcia, M. E., Grönberg, H., Guðnason, V., Hall, P., Harris, J. M., Harris, T. B., Hastie, N. D., Heath, A. C., Hernandez, D. G., Hoffmann, W., Hofman, A., Holle, R., Holliday, E. G., Hottenga, J.-J., Iacono, W. G., Illig, T., Järvelin, M.-R., Kähönen, M., Kaprio, J., Kirkpatrick, R. M., Kowgier, M., Latvala, A., Launer, L. J., Lawlor, D. A., Lehtimäki, T., Li, J., Lichtenstein, P., Lichtner, P., Liewald, D. C., Madden, P. A., Magnusson, P. K. E., Mäkinen, T. E., Masala, M., McGue, M., Metspalu, A., Mielck, A., Miller, M. B., Montgomery, G. W., Mukherjee, S., Nyholt, D. R., Oostra, B. A., Palmer, L. J., Palotie, A., Penninx, B. W. J. H., Perola, M., Peyser, P. A., Preisig, M., Räikkönen, K., Raitakari, O. T., Realo, A., Ring, S. M., Ripatti, S., Rivadeneira, F., Rudan, I., Rustichini, A., Salomaa, V., Sarin, A.-P., Schlessinger, D., Scott, R. J., Snieder, H., St Pourcain, B., Starr, J. M., Sul, J. H., Surakka, I., Svento, R., Teumer, A., Tiemeier, H., van Rooij, F. J. A., Van Wagoner, D. R., Vartiainen, E., Viikari, J., Vollenweider, P., Vonk, J. M., Waeber, G., Weir, D. R., Wichmann, H.-E., Widen, E., Willemsen, G., Wilson, J. F., Wright, A. F., Conley, D., Davey-Smith, G., Franke, L., Groenen, P. J. F., Hofman, A., Johannesson, M., Kardia, S. L. R., Krueger, R. F., Laibson, D., Martin, N. G., Meyer, M. N., Posthuma, D., Thurik, A. R., Timpson, N. J., Uitterlinden, A. G., van Duijn, C. M., Visscher, P. M., Benjamin, D. J., Cesarini, D., Koellinger, P. D., & Study LifeLines Cohort (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 340(6139), 1467-1471. doi:10.1126/science.1235488.

    Abstract

    A genome-wide association study (GWAS) of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent single-nucleotide polymorphisms (SNPs) are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (coefficient of determination R² ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.

  • Roberts, S. G. (2013). [Review of the book The Language of Gaming by A. Ensslin]. Discourse & Society, 24(5), 651-653. doi:10.1177/0957926513487819a.
  • Roberts, S. G., & Winters, J. (2013). Linguistic diversity and traffic accidents: Lessons from statistical studies of cultural traits. PLoS One, 8(8): e70902. doi:10.1371/journal.pone.0070902.

    Abstract

    The recent proliferation of digital databases of cultural and linguistic data, together with new statistical techniques becoming available, has led to a rise in so-called nomothetic studies [1]–[8]. These seek relationships between demographic variables and cultural traits from large, cross-cultural datasets. The insights from these studies are important for understanding how cultural traits evolve. While these studies are fascinating and are good at generating testable hypotheses, they may underestimate the probability of finding spurious correlations between cultural traits. Here we show that this kind of approach can find links between such unlikely cultural traits as traffic accidents, levels of extra-marital sex, political collectivism and linguistic diversity. This suggests that spurious correlations, due to historical descent, geographic diffusion or increased noise-to-signal ratios in large datasets, are much more likely than some studies admit. We suggest some criteria for the evaluation of nomothetic studies and some practical solutions to the problems. Since some of these studies are receiving media attention without a widespread understanding of the complexities of the issue, there is a risk that poorly controlled studies could affect policy. We hope to contribute towards a general skepticism for correlational studies by demonstrating the ease of finding apparently rigorous correlations between cultural traits. Despite this, we see well-controlled nomothetic studies as useful tools for the development of theories.
  • Roberts, L. (2013). Processing of gender and number agreement in late Spanish bilinguals: A commentary on Sagarra and Herschensohn. International Journal of Bilingualism, 17(5), 628-633. doi:10.1177/1367006911435693.

    Abstract

    Sagarra and Herschensohn’s article examines English L2 learners’ knowledge of Spanish gender and number agreement and their sensitivity to gender and number agreement violations (e.g. *El ingeniero presenta el prototipo *famosa/*famosos en la conferencia) during real-time sentence processing. It raises some interesting questions that are central to both acquisition and processing research. In the following paper, I discuss a selection of these topics, for instance, what types of knowledge may or may not be available/accessible during real-time L2 processing at different proficiency levels, what the differences may be between the processing of number versus gender concord, and perhaps most importantly, the problem of how to characterize the relationship between the grammar and the parser, both in general terms and in the context of language acquisition.
  • Roberts, L., Matsuo, A., & Duffield, N. (2013). Processing VP-ellipsis and VP-anaphora with structurally parallel and nonparallel antecedents: An eyetracking study. Language and Cognitive Processes, 28, 29-47. doi:10.1080/01690965.2012.676190.

    Abstract

    In this paper, we report on an eye-tracking study investigating the processing of English VP-ellipsis (John took the rubbish out. Fred did [] too) (VPE) and VP-anaphora (John took the rubbish out. Fred did it too) (VPA) constructions, with syntactically parallel versus nonparallel antecedent clauses (e.g., The rubbish was taken out by John. Fred did [] too/Fred did it too). The results show first that VPE involves greater processing costs than VPA overall. Second, although the structural nonparallelism of the antecedent clause elicited a processing cost for both anaphor types, there was a difference in the timing and the strength of this parallelism effect: it was earlier and more fleeting for VPA, as evidenced by regression path times, whereas the effect occurred later with VPE completions, showing up in second and total fixation times measures, and continuing on into the reading of the adjacent text. Taking the observed differences between the processing of the two anaphor types together with other research findings in the literature, we argue that our data support the idea that in the case of VPE, the VP from the antecedent clause necessitates more computation at the elision site before it is linked to its antecedent than is the case for VPA.

  • Roe, J. M., Vidal-Piñeiro, D., Amlien, I. K., Pan, M., Sneve, M. H., Thiebaut de Schotten, M., Friedrich, P., Sha, Z., Francks, C., Eilertsen, E. M., Wang, Y., Walhovd, K. B., Fjell, A. M., & Westerhausen, R. (2023). Tracing the development and lifespan change of population-level structural asymmetry in the cerebral cortex. eLife, 12: e84685. doi:10.7554/eLife.84685.

    Abstract

    Cortical asymmetry is a ubiquitous feature of brain organization that is altered in neurodevelopmental disorders and aging. Achieving consensus on cortical asymmetries in humans is necessary to uncover the genetic-developmental mechanisms that shape them and factors moderating cortical lateralization. Here, we delineate population-level asymmetry in cortical thickness and surface area vertex-wise in 7 datasets and chart asymmetry trajectories across life (4-89 years; observations = 3937; 70% longitudinal). We reveal asymmetry interrelationships, heritability, and test associations in UK Biobank (N=∼37,500). Cortical asymmetry was robust across datasets. Whereas areal asymmetry is predominantly stable across life, thickness asymmetry grows in development and declines in aging. Areal asymmetry correlates in specific regions, whereas thickness asymmetry is globally interrelated across cortex and suggests high directional variability in global thickness lateralization. Areal asymmetry is moderately heritable (max h²SNP ∼19%), and phenotypic correlations are reflected by high genetic correlations, whereas heritability of thickness asymmetry is low. Finally, we detected an asymmetry association with cognition and confirm recently-reported handedness links. Results suggest areal asymmetry is developmentally stable and arises in early life, whereas developmental changes in thickness asymmetry may lead to directional variability of global thickness lateralization. Our results bear enough reproducibility to serve as a standard for future brain asymmetry studies.

  • Roelofs, A., & Piai, V. (2013). Associative facilitation in the Stroop task: Comment on Mahon et al. Cortex, 49, 1767-1769. doi:10.1016/j.cortex.2013.03.001.

    Abstract

    First paragraph: A fundamental issue in psycholinguistics concerns how speakers retrieve intended words from long-term memory. According to a selection by competition account (e.g., Levelt et al., 1999), conceptually driven word retrieval involves the activation of a set of candidate words and a competitive selection of the intended word from this set.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Context effects and selective attention in picture naming and word reading: Competition versus response exclusion. Language and Cognitive Processes, 28, 655-671. doi:10.1080/01690965.2011.615663.

    Abstract

    For several decades, context effects in picture naming and word reading have been extensively investigated. However, researchers have found no agreement on the explanation of the effects. Whereas it has long been assumed that several types of effect reflect competition in word selection, recently it has been argued that these effects reflect the exclusion of articulatory responses from an output buffer. Here, we first critically evaluate the findings on context effects in picture naming that have been taken as evidence against the competition account, and we argue that the findings are, in fact, compatible with the competition account. Moreover, some of the findings appear to challenge rather than support the response exclusion account. Next, we compare the response exclusion and competition accounts with respect to their ability to explain data on word reading. It appears that response exclusion does not account well for context effects on word reading times, whereas computer simulations reveal that a competition model like WEAVER++ accounts for the findings.

  • Roelofs, A., Dijkstra, T., & Gerakaki, S. (2013). Modeling of word translation: Activation flow from concepts to lexical items. Bilingualism: Language and Cognition, 16, 343-353. doi:10.1017/S1366728912000612.

    Abstract

    Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to support the discrete-flow model. Here, we report results of computer simulations with the continuous-flow WEAVER++ model (Roelofs, 1992, 2006) demonstrating that the empirical observation taken to be in favor of discrete models is, in fact, only consistent with those models and equally compatible with more continuous models of word production by monolingual and bilingual speakers. Continuous models are specifically and independently supported by other empirical evidence on the effect of context pictures on native word production.
  • Roelofs, A., Piai, V., & Schriefers, H. (2013). Selection by competition in word production: Rejoinder to Janssen (2012). Language and Cognitive Processes, 28, 679-683. doi:10.1080/01690965.2013.770890.

    Abstract

    Roelofs, Piai, and Schriefers argue that several findings on the effect of distractor words and pictures in producing words support a selection-by-competition account and challenge a non-competitive response-exclusion account. Janssen argues that the findings do not challenge response exclusion, and he conjectures that both competitive and non-competitive mechanisms underlie word selection. Here, we maintain that the findings do challenge the response-exclusion account and support the assumption of a single competitive mechanism underlying word selection.

  • Rommers, J., Meyer, A. S., & Huettig, F. (2013). Object shape and orientation do not routinely influence performance during language processing. Psychological Science, 24, 2218-2225. doi:10.1177/0956797613490746.

    Abstract

    The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.

  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to. Neuropsychologia, 51(3), 437-447. doi:10.1016/j.neuropsychologia.2012.12.002.

    Abstract

    When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words’ referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners’ eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., “moon” in “In 1969 Neil Armstrong was the first man to set foot on the moon”). 500 ms before the acoustic onset of the critical word, participants were shown four-object displays featuring three unrelated distractor objects and a critical object, which was either the target object (e.g., moon), an object with a similar shape (e.g., tomato), or an unrelated control object (e.g., rice). In a time window before shape information from the spoken target word could be retrieved, participants already tended to fixate both the target and the shape competitors more often than they fixated the control objects, indicating that they had anticipatorily activated the shape of the upcoming word's referent. This was confirmed in Experiment 2, which was an ERP experiment without picture displays. Participants listened to the same lead-in sentences as in Experiment 1. The sentence-final words corresponded to the predictable target, the shape competitor, or the unrelated control object (yielding, for instance, “In 1969 Neil Armstrong was the first man to set foot on the moon/tomato/rice”). N400 amplitude in response to the final words was significantly attenuated in the shape-related compared to the unrelated condition. Taken together, these results suggest that listeners can activate perceptual attributes of objects before they are referred to in an utterance.
  • Rommers, J., Dijkstra, T., & Bastiaansen, M. C. M. (2013). Context-dependent semantic processing in the human brain: Evidence from idiom comprehension. Journal of Cognitive Neuroscience, 25(5), 762-776. doi:10.1162/jocn_a_00337.

    Abstract

    Language comprehension involves activating word meanings and integrating them with the sentence context. This study examined whether these routines are carried out even when they are theoretically unnecessary, namely in the case of opaque idiomatic expressions, for which the literal word meanings are unrelated to the overall meaning of the expression. Predictable words in sentences were replaced by a semantically related or unrelated word. In literal sentences, this yielded previously established behavioral and electrophysiological signatures of semantic processing: semantic facilitation in lexical decision, a reduced N400 for semantically related relative to unrelated words, and a power increase in the gamma frequency band that was disrupted by semantic violations. However, the same manipulations in idioms yielded none of these effects. Instead, semantic violations elicited a late positivity in idioms. Moreover, gamma band power was lower in correct idioms than in correct literal sentences. It is argued that the brain's semantic expectancy and literal word meaning integration operations can, to some extent, be “switched off” when the context renders them unnecessary. Furthermore, the results lend support to models of idiom comprehension that involve unitary idiom representations.
  • Rösler, D., & Skiba, R. (1986). Ein vernetzter Lehrmaterial-Steinbruch für Deutsch als Zweitsprache (Projekt EKMAUS, FU Berlin) [A networked quarry of teaching materials for German as a second language (EKMAUS project, FU Berlin)]. Deutsch Lernen: Zeitschrift für den Sprachunterricht mit ausländischen Arbeitnehmern, 2, 68-71. Retrieved from http://www.daz-didaktik.de/html/1986.html.
  • Rossano, F., Carpenter, M., & Tomasello, M. (2013). One-year-old infants follow others’ voice direction. Psychological Science, 23, 1298-1302. doi:10.1177/0956797612450032.

    Abstract

    We investigated 1-year-old infants’ ability to infer an adult’s focus of attention solely on the basis of her voice direction. In Studies 1 and 2, 12- and 16-month-olds watched an adult go behind a barrier and then heard her verbally express excitement about a toy hidden in one of two boxes at either end of the barrier. Even though they could not see the adult, infants of both ages followed her voice direction to the box containing the toy. Study 2 showed that infants could do this even when the adult was positioned closer to the incorrect box while she vocalized toward the correct one (and thus ruled out the possibility that infants were merely approaching the source of the sound). In Study 3, using the same methods as in Study 2, we found that chimpanzees performed the task at chance level. Our results show that infants can determine the focus of another person’s attention through auditory information alone—a useful skill for establishing joint attention.

  • Rossi, E., Pereira Soares, S. M., Prystauka, Y., Nakamura, M., & Rothman, J. (2023). Riding the (brain) waves! Using neural oscillations to inform bilingualism research. Bilingualism: Language and Cognition, 26(1), 202-215. doi:10.1017/S1366728922000451.

    Abstract

    The study of the brain’s oscillatory activity has been a standard technique to gain insights into human neurocognition for a relatively long time. However, as a complementary analysis to ERPs, only very recently has it been utilized to study bilingualism and its neural underpinnings. Here, we provide a theoretical and methodological starter for scientists in the (psycho)linguistics and neurocognition of bilingualism field(s) to understand the bases and applications of this analytical tool. Towards this goal, we provide a description of the characteristics of the human neural (and its oscillatory) signal, followed by an in-depth description of various types of EEG oscillatory analyses, supplemented by figures and relevant examples. We then utilize the scant, yet emergent, literature on neural oscillations and bilingualism to highlight the potential of how analyzing neural oscillations can advance our understanding of the (psycho)linguistic and neurocognitive understanding of bilingualism.
  • Rossi, G., Dingemanse, M., Floyd, S., Baranova, J., Blythe, J., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2023). Shared cross-cultural principles underlie human prosocial behavior at the smallest scale. Scientific Reports, 13: 6057. doi:10.1038/s41598-023-30580-5.

    Abstract

    Prosociality and cooperation are key to what makes us human. But different cultural norms can shape our evolved capacities for interaction, leading to differences in social relations. How people share resources has been found to vary across cultures, particularly when stakes are high and when interactions are anonymous. Here we examine prosocial behavior among familiars (both kin and non-kin) in eight cultures on five continents, using video recordings of spontaneous requests for immediate, low-cost assistance (e.g., to pass a utensil). We find that, at the smallest scale of human interaction, prosocial behavior follows cross-culturally shared principles: requests for assistance are very frequent and mostly successful; and when people decline to give help, they normally give a reason. Although there are differences in the rates at which such requests are ignored, or require verbal acceptance, cultural variation is limited, pointing to a common foundation for everyday cooperation around the world.
  • Rubio-Fernández, P. (2013). Associative and inferential processes in pragmatic enrichment: The case of emergent properties. Language and Cognitive Processes, 28(6), 723-745. doi:10.1080/01690965.2012.659264.

    Abstract

    Experimental research on word processing has generally focused on properties that are associated with a concept in long-term memory (e.g., basketball—round). The present study addresses a related issue: the accessibility of “emergent properties”, or conceptual properties that have to be inferred in a given context (e.g., basketball—floats). This investigation sheds light on a current debate in cognitive pragmatics about how many pragmatic systems there are (Carston, 2002a, 2007; Recanati, 2004, 2007). Two experiments using a self-paced reading task suggest that inferential processes are fully integrated in the processing system. Emergent properties are accessed early on in processing, without delaying later discourse integration processes. I conclude that the theoretical distinction between explicit and implicit meaning is not paralleled by that between associative and inferential processes.
  • Rubio-Fernández, P. (2013). Perspective tracking in progress: Do not disturb. Cognition, 129(2), 264-272. doi:10.1016/j.cognition.2013.07.005.

    Abstract

    Two experiments tested the hypothesis that indirect false-belief tests allow participants to track a protagonist’s perspective uninterruptedly, whereas direct false-belief tests disrupt the process of perspective tracking in various ways. For this purpose, adults’ performance was compared on indirect and direct false-belief tests by means of continuous eye-tracking. Experiment 1 confirmed that the false-belief question used in direct tests disrupts perspective tracking relative to what is observed in an indirect test. Experiment 2 confirmed that perspective tracking is a continuous process that can be easily disrupted in adults by a subtle visual manipulation in both indirect and direct tests. These results call for a closer analysis of the demands of the false-belief tasks that have been used in developmental research.
  • Rubio-Fernández, P., & Geurts, B. (2013). How to pass the false-belief task before your fourth birthday. Psychological Science, 24(1), 27-33. doi:10.1177/0956797612447819.

    Abstract

    The experimental record of the last three decades shows that children under 4 years old fail all sorts of variations on the standard false-belief task, whereas more recent studies have revealed that infants are able to pass nonverbal versions of the task. We argue that these paradoxical results are an artifact of the type of false-belief tasks that have been used to test infants and children: Nonverbal designs allow infants to keep track of a protagonist’s perspective over a course of events, whereas verbal designs tend to disrupt the perspective-tracking process in various ways, which makes it too hard for younger children to demonstrate their capacity for perspective tracking. We report three experiments that confirm this hypothesis by showing that 3-year-olds can pass a suitably streamlined version of the verbal false-belief task. We conclude that young children can pass the verbal false-belief task provided that they are allowed to keep track of the protagonist’s perspective without too much disruption.
  • Rutz, C., Bronstein, M., Raskin, A., Vernes, S. C., Zacarian, K., & Blasi, D. E. (2023). Using machine learning to decode animal communication. Science, 381(6654), 152-155. doi:10.1126/science.adg7314.

    Abstract

    The past few years have seen a surge of interest in using machine learning (ML) methods for studying the behavior of nonhuman animals (hereafter “animals”) (1). A topic that has attracted particular attention is the decoding of animal communication systems using deep learning and other approaches (2). Now is the time to tackle challenges concerning data availability, model validation, and research ethics, and to embrace opportunities for building collaborations across disciplines and initiatives.
  • Ryskin, R., & Nieuwland, M. S. (2023). Prediction during language comprehension: What is next? Trends in Cognitive Sciences, 27(11), 1032-1052. doi:10.1016/j.tics.2023.08.003.

    Abstract

    Prediction is often regarded as an integral aspect of incremental language comprehension, but little is known about the cognitive architectures and mechanisms that support it. We review studies showing that listeners and readers use all manner of contextual information to generate multifaceted predictions about upcoming input. The nature of these predictions may vary between individuals owing to differences in language experience, among other factors. We then turn to unresolved questions which may guide the search for the underlying mechanisms. (i) Is prediction essential to language processing or an optional strategy? (ii) Are predictions generated from within the language system or by domain-general processes? (iii) What is the relationship between prediction and memory? (iv) Does prediction in comprehension require simulation via the production system? We discuss promising directions for making progress in answering these questions and for developing a mechanistic understanding of prediction in language.
  • Sadakata, M., & McQueen, J. M. (2013). High stimulus variability in nonnative speech learning supports formation of abstract categories: Evidence from Japanese geminates. Journal of the Acoustical Society of America, 134(2), 1324-1335. doi:10.1121/1.4812767.

    Abstract

    This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.
  • Sajovic, J., Meglič, A., Corradi, Z., Khan, M., Maver, A., Vidmar, M. J., Hawlina, M., Cremers, F. P. M., & Fakin, A. (2023). ABCA4 variant c.5714+5G>A in trans with null alleles results in primary RPE damage. Investigative Opthalmology & Visual Science, 64(12): 33. doi:10.1167/iovs.64.12.33.

    Abstract

    Purpose: To determine the disease pathogenesis associated with the frequent ABCA4 variant c.5714+5G>A (p.[=,Glu1863Leufs*33]).

    Methods: Patient-derived photoreceptor precursor cells were generated to analyze the effect of c.5714+5G>A on splicing and perform a quantitative analysis of c.5714+5G>A products. Patients with c.5714+5G>A in trans with a null allele (i.e., c.5714+5G>A patients; n = 7) were compared with patients with two null alleles (i.e., double null patients; n = 11), with special attention to the degree of RPE atrophy (area of definitely decreased autofluorescence) and the degree of photoreceptor impairment (outer nuclear layer thickness and pattern electroretinography amplitude).

    Results: RT-PCR of mRNA from patient-derived photoreceptor precursor cells showed exon 40 and exon 39/40 deletion products, as well as the normal transcript. Quantification of products showed 52.4% normal and 47.6% mutant ABCA4 mRNA. Clinically, c.5714+5G>A patients displayed significantly better structural and functional preservation of photoreceptors (thicker outer nuclear layer, presence of tubulations, higher pattern electroretinography amplitude) than double null patients with similar degrees of RPE loss, whereas double null patients exhibited signs of extensive photoreceptor damage even in the areas with preserved RPE.

    Conclusions: The prototypical STGD1 sequence of events of primary RPE and secondary photoreceptor damage is congruous with c.5714+5G>A, but not the double null genotype, which implies different and genotype-dependent disease mechanisms. We hypothesize that the relative photoreceptor sparing in c.5714+5G>A patients results from the remaining function of the ABCA4 transporter originating from the normally spliced product, possibly by decreasing the direct bisretinoid toxicity on photoreceptor membranes.
  • Sakkalou, E., Ellis-Davies, K., Fowler, N., Hilbrink, E., & Gattis, M. (2013). Infants show stability of goal-directed imitation. Journal of Experimental Child Psychology, 114, 1-9. doi:10.1016/j.jecp.2012.09.005.

    Abstract

    Previous studies have reported that infants selectively reproduce observed actions and have argued that this selectivity reflects understanding of intentions and goals, or goal-directed imitation. We reasoned that if selective imitation of goal-directed actions reflects understanding of intentions, infants should demonstrate stability across perceptually and causally dissimilar imitation tasks. To this end, we employed a longitudinal within-participants design to compare the performance of 37 infants on two imitation tasks, with one administered at 13 months and one administered at 14 months. Infants who selectively imitated goal-directed actions in an object-cued task at 13 months also selectively imitated goal-directed actions in a vocal-cued task at 14 months. We conclude that goal-directed imitation reflects a general ability to interpret behavior in terms of mental states.
  • Salomo, D., & Liszkowski, U. (2013). Sociocultural settings influence the emergence of prelinguistic deictic gestures. Child development, 84(4), 1296-1307. doi:10.1111/cdev.12026.

    Abstract

    Daily activities of forty-eight 8- to 15-month-olds and their interlocutors were observed to test for the presence and frequency of triadic joint actions and deictic gestures across three different cultures: Yucatec-Mayans (Mexico), Dutch (Netherlands), and Shanghai-Chinese (China). The amount of joint action and deictic gestures to which infants were exposed differed systematically across settings, allowing testing for the role of social–interactional input in the ontogeny of prelinguistic gestures. Infants gestured more and at an earlier age depending on the amount of joint action and gestures infants were exposed to, revealing early prelinguistic sociocultural differences. The study shows that the emergence of basic prelinguistic gestures is socially mediated, suggesting that others' actions structure the ontogeny of human communication from early on.
  • Sampaio, C., & Konopka, A. E. (2013). Memory for non-native language: The role of lexical processing in the retention of surface form. Memory, 21, 537-544. doi:10.1080/09658211.2012.746371.

    Abstract

    Research on memory for native language (L1) has consistently shown that retention of surface form is inferior to that of gist (e.g., Sachs, 1967). This paper investigates whether the same pattern is found in memory for non-native language (L2). We apply a model of bilingual word processing to more complex linguistic structures and predict that memory for L2 sentences ought to contain more surface information than L1 sentences. Native and non-native speakers of English were tested on a set of sentence pairs with different surface forms but the same meaning (e.g., “The bullet hit/struck the bull's eye”). Memory for these sentences was assessed with a cued recall procedure. Responses showed that native and non-native speakers did not differ in the accuracy of gist-based recall but that non-native speakers outperformed native speakers in the retention of surface form. The results suggest that L2 processing involves more intensive encoding of lexical level information than L1 processing.
  • Sauter, D. A., & Eisner, F. (2013). Commonalities outweigh differences in the communication of emotions across human cultures [Letter]. Proceedings of the National Academy of Sciences of the United States of America, 110, E180. doi:10.1073/pnas.1209522110.
  • Schapper, A., & Hammarström, H. (2013). Innovative numerals in Malayo-Polynesian languages outside of Oceania. Oceanic Linguistics, 52, 423-455.