Publications

  • Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.

    Abstract

    Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability.
    This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor
    information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced
    as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory
    perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on
    whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase
    the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large
    objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture
    or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding
    them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory and we
    found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’
    heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of
    the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed
    by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines.
  • Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.

    Abstract

    Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.
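    The amplitude envelope reported above is a standard acoustic measure that can be approximated with common signal-processing tools. Below is a minimal sketch, not the authors' analysis pipeline; the input file name and the 12 Hz smoothing cutoff are illustrative assumptions.

    ```python
    # Minimal sketch: smoothed amplitude envelope of a speech recording.
    # The file name and low-pass cutoff are assumptions, not the study's settings.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import hilbert, butter, filtfilt

    fs, audio = wavfile.read("pa_utterances.wav")   # hypothetical mono recording
    audio = audio.astype(np.float64)
    audio /= np.max(np.abs(audio))                  # normalize to [-1, 1]

    envelope = np.abs(hilbert(audio))               # instantaneous amplitude

    # Low-pass filter the envelope to keep slow, syllable-scale modulations.
    b, a = butter(N=4, Wn=12 / (fs / 2), btype="low")
    smooth_envelope = filtfilt(b, a, envelope)

    print("peak envelope amplitude:", round(float(smooth_envelope.max()), 3))
    ```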

    Additional information

    Link to Preprint on OSF
  • Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.

    Abstract

    We introduce applications of established methods in time-series and network
    analysis that we jointly apply here for the kinematic study of gesture
    ensembles. We define a gesture ensemble as the set of gestures produced
    during discourse by a single person or a group of persons. Here we are
    interested in how gestures kinematically relate to one another. We use
    a bivariate time-series analysis called dynamic time warping to assess how
    similar each gesture is to other gestures in the ensemble in terms of their
    velocity profiles (as well as studying multivariate cases with gesture velocity
    and speech amplitude envelope profiles). By relating each gesture event to
    all other gesture events produced in the ensemble, we obtain a weighted
    matrix that essentially represents a network of similarity relationships. We
    can therefore apply network analysis that can gauge, for example, how
    diverse or coherent certain gestures are with respect to the gesture ensemble.
    We believe these analyses promise to be of great value for gesture
    studies, as we can come to understand how low-level gesture features
    (kinematics of gesture) relate to the higher-order organizational structures
    present at the level of discourse.
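    The pipeline sketched in this abstract (pairwise dynamic time warping over gesture velocity profiles, then network measures on the resulting similarity matrix) can be illustrated as follows. This is a minimal sketch under assumptions, not the authors' code: the toy velocity profiles and the distance-to-similarity conversion are illustrative choices.

    ```python
    # Minimal sketch: pairwise DTW over gesture velocity profiles -> similarity network.
    import numpy as np
    import networkx as nx

    def dtw_distance(x, y):
        """Basic dynamic time warping distance between two 1-D series."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    # Toy gesture ensemble: each entry is one gesture's velocity profile.
    rng = np.random.default_rng(0)
    gestures = [np.abs(np.sin(np.linspace(0, np.pi, rng.integers(40, 80))))
                for _ in range(6)]

    n = len(gestures)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw_distance(gestures[i], gestures[j])

    # Convert distances to similarities and treat the matrix as a weighted network.
    sim = 1.0 / (1.0 + dist)
    np.fill_diagonal(sim, 0.0)
    G = nx.from_numpy_array(sim)

    # Node strength: how similar each gesture is to the rest of the ensemble.
    print(dict(G.degree(weight="weight")))
    ```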

    Additional information

    Open Data OSF
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.

    Abstract

    The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture
    movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech
    synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms.
    However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the
    body. In the current preregistered study, movements with high physical impact affected phonation in line with
    gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation
    acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus
    movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony
    was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for
    postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have
    implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
  • Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.

    Abstract

    There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
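    The core timing comparison described above can be sketched in a few lines: derive a smoothed speed profile from tracked hand positions, locate its peak, and compare that peak time with the peak time from a reference motion tracker. The frame rate, smoothing width, and the synthetic trajectory below are illustrative assumptions, not the tutorial's materials.

    ```python
    # Minimal sketch: peak-speed timing from video-tracked hand positions
    # compared against a reference peak time from a wired motion tracker.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    fps = 50.0                                   # assumed video frame rate
    t = np.arange(0, 2, 1 / fps)                 # one 2-second gesture event

    # Hypothetical wrist coordinates (pixels); in practice, the output of a
    # video tracker (deep learning or pixel change) or a wired system.
    x = 100 + 80 * np.sin(np.pi * t / 2) ** 2
    y = 200 + 20 * t

    speed = np.hypot(np.diff(x), np.diff(y)) * fps        # pixels per second
    speed = gaussian_filter1d(speed, sigma=2)             # light smoothing

    video_peak_time = t[1:][np.argmax(speed)]
    reference_peak_time = 1.02                   # e.g., from the wired tracker (s)

    print(f"peak-timing difference: {abs(video_peak_time - reference_peak_time) * 1000:.1f} ms")
    ```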
  • Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.

    Abstract

    Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
  • Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2020). Alignment in multimodal interaction: An integrative framework. Cognitive Science, 44(11): e12911. doi:10.1111/cogs.12911.

    Abstract

    When people are engaged in social interaction, they can repeat aspects of each other’s communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well‐defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behavior: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment.
  • Rasenberg, M., Rommers, J., & Van Bergen, G. (2020). Anticipating predictability: An ERP investigation of expectation-managing discourse markers in dialogue comprehension. Language, Cognition and Neuroscience, 35(1), 1-16. doi:10.1080/23273798.2019.1624789.

    Abstract

    In two ERP experiments, we investigated how the Dutch discourse markers eigenlijk “actually”, signalling expectation disconfirmation, and inderdaad “indeed”, signalling expectation confirmation, affect incremental dialogue comprehension. We investigated their effects on the processing of subsequent (un)predictable words, and on the quality of word representations in memory. Participants read dialogues with (un)predictable endings that followed a discourse marker (eigenlijk in Experiment 1, inderdaad in Experiment 2) or a control adverb. We found no strong evidence that discourse markers modulated online predictability effects elicited by subsequently read words. However, words following eigenlijk elicited an enhanced posterior post-N400 positivity compared with words following an adverb regardless of their predictability, potentially reflecting increased processing costs associated with pragmatically driven discourse updating. No effects of inderdaad were found on online processing, but inderdaad seemed to influence memory for (un)predictable dialogue endings. These findings nuance our understanding of how pragmatic markers affect incremental language comprehension.

    Additional information

    plcp_a_1624789_sm6686.docx
  • Ravignani, A., & Kotz, S. (2020). Breathing, voice and synchronized movement. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23223-23224. doi:10.1073/pnas.2011402117.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). The role of social network structure in the emergence of linguistic structure. Cognitive Science, 44(8): e12876. doi:10.1111/cogs.12876.

    Abstract

    Social network structure has been argued to shape the structure of languages, as well as affect the spread of innovations and the formation of conventions in the community. Specifically, theoretical and computational models of language change predict that sparsely connected communities develop more systematic languages, while tightly knit communities can maintain high levels of linguistic complexity and variability. However, the role of social network structure in the cultural evolution of languages has never been tested experimentally. Here, we present results from a behavioral group communication study, in which we examined the formation of new languages created in the lab by micro‐societies that varied in their network structure. We contrasted three types of social networks: fully connected, small‐world, and scale‐free. We examined the artificial languages created by these different networks with respect to their linguistic structure, communicative success, stability, and convergence. Results did not reveal any effect of network structure for any measure, with all languages becoming similarly more systematic, more accurate, more stable, and more shared over time. At the same time, small‐world networks showed the greatest variation in their convergence, stabilization, and emerging structure patterns, indicating that network structure can influence the community's susceptibility to random linguistic changes (i.e., drift).
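    The three topologies contrasted in the study can be generated with standard graph libraries. A minimal sketch, assuming networkx; the group size and wiring parameters are illustrative, not the study's exact values.

    ```python
    # Minimal sketch: the three micro-society topologies contrasted in the study.
    # Node count and wiring parameters are illustrative only.
    import networkx as nx

    n = 8                                                 # participants per group
    fully_connected = nx.complete_graph(n)                # everyone talks to everyone
    small_world = nx.watts_strogatz_graph(n, k=4, p=0.2)  # local clusters + shortcuts
    scale_free = nx.barabasi_albert_graph(n, m=2)         # a few highly connected hubs

    for name, g in [("fully connected", fully_connected),
                    ("small-world", small_world),
                    ("scale-free", scale_free)]:
        avg_degree = sum(d for _, d in g.degree()) / n
        print(f"{name}: average degree {avg_degree:.1f}")
    ```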
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations between the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning first with a brief review of literacy and aphasia research, and complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which will be focusing on methodological issues and the importance of taking normative values in consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Ripperda, J., Drijvers, L., & Holler, J. (2020). Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data. Behavior Research Methods, 52(4), 1783-1794. doi:10.3758/s13428-020-01350-2.

    Abstract

    In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) of our tool. We then provide a proof-of-principle and validation of our method by comparing SPUDNIG’s output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need of a human coder (e.g., for false positives detection), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and could successfully detect all iconic gestures in our validation dataset. Importantly, SPUDNIG’s output can directly be imported into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication due to its annotations significantly accelerating the analysis of large video corpora.
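    The general idea behind such movement detection (flagging stretches where a tracked hand keypoint moves faster than some threshold) can be illustrated as below. This is a hedged sketch of the principle, not SPUDNIG's actual implementation; the speed threshold, minimum duration, and synthetic coordinates are assumptions.

    ```python
    # Minimal sketch: threshold-based movement detection on tracked hand keypoints.
    # Not SPUDNIG's code; threshold and minimum duration are illustrative assumptions.
    import numpy as np

    def detect_movements(xy, fps, speed_thresh=15.0, min_frames=5):
        """Return (start_frame, end_frame) pairs where keypoint speed exceeds a threshold.

        xy: array of shape (n_frames, 2) with tracked hand coordinates in pixels.
        """
        speed = np.hypot(*np.diff(xy, axis=0).T) * fps    # pixels per second
        moving = speed > speed_thresh
        segments, start = [], None
        for i, m in enumerate(moving):
            if m and start is None:
                start = i
            elif not m and start is not None:
                if i - start >= min_frames:
                    segments.append((start, i))
                start = None
        if start is not None and len(moving) - start >= min_frames:
            segments.append((start, len(moving)))
        return segments

    # Synthetic example: the hand rests, moves for about a second, then rests again.
    rng = np.random.default_rng(0)
    coords = np.concatenate([np.full((30, 2), 100.0),
                             100 + np.cumsum(rng.random((25, 2)) * 5, axis=0),
                             np.full((30, 2), 160.0)])
    print(detect_movements(coords, fps=25))
    ```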

    Additional information

    data and materials
  • Rodd, J., Bosker, H. R., Ernestus, M., Alday, P. M., Meyer, A. S., & Ten Bosch, L. (2020). Control of speaking rate is achieved by switching between qualitatively distinct cognitive ‘gaits’: Evidence from simulation. Psychological Review, 127(2), 281-304. doi:10.1037/rev0000172.

    Abstract

    That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: When walking, speed can be continuously increased, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech “gaits” that resemble walking and running? Or is control achieved by continuous modulation of a single gait? This study investigates these possibilities through simulations of a new connectionist computational model of the cognitive process of speech production, EPONA, that borrows from Dell, Burger, and Svec’s (1997) model. The model has parameters that can be adjusted to fit the temporal characteristics of speech at different speaking rates. We trained the model on a corpus of disyllabic Dutch words produced at different speaking rates. During training, different clusters of parameter values (regimes) were identified for different speaking rates. In a 1-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In a multiple gait system, there is no linear relationship between the parameter settings associated with each gait, resulting in an abrupt shift in parameter values to move from speaking slowly to speaking fast. After training, the model achieved good fits in all three speaking rates. The parameter settings associated with each speaking rate were not linearly related, suggesting the presence of cognitive gaits. Thus, we provide the first computationally explicit account of the ability to modulate the speech production system to achieve different speaking styles.

    Additional information

    Supplemental material
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch--English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Rojas-Berscia, L. M., Napurí, A., & Wang, L. (2020). Shawi (Chayahuita). Journal of the International Phonetic Association, 50(3), 417-430. doi:10.1017/S0025100318000415.

    Abstract

    Shawi is the language of the indigenous Shawi/Chayahuita people in Northwestern Amazonia, Peru. It belongs to the Kawapanan language family, together with its moribund sister language, Shiwilu. It is spoken by about 21,000 speakers (see Rojas-Berscia 2013) in the provinces of Alto Amazonas and Datem del Marañón in the region of Loreto and in the northern part of the region of San Martín, being one of the most vital languages in the country (see Figure 1). Although Shawi groups in the Upper Amazon were contacted by Jesuit missionaries during colonial times, the maintenance of their customs and language is striking. To date, most Shawi children are monolingual and have their first contact with Spanish at school. Yet, due to globalisation and the construction of highways by the Peruvian government, many Shawi villages are progressively westernising. This may result in the imminent loss of their indigenous culture and language.

    Additional information

    Supplementary material
  • Rossi, G. (2020). Other-repetition in conversation across languages: Bringing prosody into pragmatic typology. Language in Society, 49(4), 495-520. doi:10.1017/S0047404520000251.

    Abstract

    In this article, I introduce the aims and scope of a project examining other-repetition in natural conversation. This introduction provides the conceptual and methodological background for the five language-specific studies contained in this special issue, focussing on other-repetition in English, Finnish, French, Italian, and Swedish. Other-repetition is a recurrent conversational phenomenon in which a speaker repeats all or part of what another speaker has just said, typically in the next turn. Our project focusses particularly on other-repetitions that problematise what is being repeated and typically solicit a response. Previous research has shown that such repetitions can accomplish a range of conversational actions. But how do speakers of different languages distinguish these actions? In addressing this question, we put at centre stage the resources of prosody—the nonlexical acoustic-auditory features of speech—and bring its systematic analysis into the growing field of pragmatic typology—the comparative study of language use and conversational structure.
  • Rossi, G. (2020). The prosody of other-repetition in Italian: A system of tunes. Language in Society, 49(4), 619-652. doi:10.1017/S0047404520000627.

    Abstract

    As part of the project reported on in this special issue, the present study provides an overview of the types of action accomplished by other-repetition in Italian, with particular reference to the variety of the language spoken in the northeastern province of Trento. The analysis surveys actions within the domain of initiating repair, actions that extend beyond initiating repair, and actions that are alternative to initiating repair. Pitch contour emerges as a central design feature of other-repetition in Italian, with six nuclear contours associated with distinct types of action, sequential trajectories, and response patterns. The study also documents the interplay of pitch contour with other prosodic features (pitch span and register) and visible behavior (head nods, eyebrow movements).

    Additional information

    Sound clips.zip
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English speaking children acquire wh-questions is determined by two interlocking linguistic factors; the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rubio-Fernández, P., & Jara-Ettinger, J. (2020). Incrementality and efficiency shape pragmatics across languages. Proceedings of the National Academy of Sciences, 117, 13399-13404. doi:10.1073/pnas.1922067117.

    Abstract

    To correctly interpret a message, people must attend to the context in which it was produced. Here we investigate how this process, known as pragmatic reasoning, is guided by two universal forces in human communication: incrementality and efficiency, with speakers of all languages interpreting language incrementally and making the most efficient use of the incoming information. Crucially, however, the interplay between these two forces results in speakers of different languages having different pragmatic information available at each point in processing, including inferences about speaker intentions. In particular, the position of adjectives relative to nouns (e.g., “black lamp” vs. “lamp black”) makes visual context information available in reverse orders. In an eye-tracking study comparing four unrelated languages that have been understudied with regard to language processing (Catalan, Hindi, Hungarian, and Wolof), we show that speakers of languages with an adjective–noun order integrate context by first identifying properties (e.g., color, material, or size), whereas speakers of languages with a noun–adjective order integrate context by first identifying kinds (e.g., lamps or chairs). Most notably, this difference allows listeners of adjective–noun descriptions to infer the speaker’s intention when using an adjective (e.g., “the black…” as implying “not the blue one”) and anticipate the target referent, whereas listeners of noun–adjective descriptions are subject to temporary ambiguity when deriving the same interpretation. We conclude that incrementality and efficiency guide pragmatic reasoning across languages, with different word orders having different pragmatic affordances.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • Scharenborg, O., Ondel, L., Palaskar, S., Arthur, P., Ciannella, F., Du, M., Larsen, E., Merkx, D., Riad, R., Wang, L., Dupoux, E., Besacier, L., Black, A., Hasegawa-Johnson, M., Metze, F., Neubig, G., Stüker, S., Godard, P., & Müller, M. (2020). Speech technology for unwritten languages. IEEE/ACM Transactions on Audio, Speech and Language Processing, 28, 964-975. doi:10.1109/TASLP.2020.2973896.

    Abstract

    Speech technology plays an important role in our everyday life. Among others, speech is used for human-computer interaction, for instance for information retrieval and on-line shopping. In the case of an unwritten language, however, speech technology is unfortunately difficult to create, because it cannot be created by the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test the sufficiency of the learned representations to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech-to-meaning and from meaning-to-speech, bypassing the need for text, is possible.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Schijven, D., Stevelink, R., McCormack, M., van Rheenen, W., Luykx, J. J., Koeleman, B. P., Veldink, J. H., Project MinE ALS GWAS Consortium, & International League Against Epilepsy Consortium on Complex Epilepsies (2020). Analysis of shared common genetic risk between amyotrophic lateral sclerosis and epilepsy. Neurobiology of Aging, 92, 153.e1-153.e5. doi:10.1016/j.neurobiolaging.2020.04.011.

    Abstract

    Because hyper-excitability has been shown to be a shared pathophysiological mechanism, we used the latest and largest genome-wide studies in amyotrophic lateral sclerosis (n = 36,052) and epilepsy (n = 38,349) to determine genetic overlap between these conditions. First, we showed no significant genetic correlation, also when binned on minor allele frequency. Second, we confirmed the absence of polygenic overlap using genomic risk score analysis. Finally, we did not identify pleiotropic variants in meta-analyses of the 2 diseases. Our findings indicate that amyotrophic lateral sclerosis and epilepsy do not share common genetic risk, showing that hyper-excitability in both disorders has distinct origins.
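    The genomic risk score analysis mentioned above boils down to a weighted sum: each individual's score is the dosage of each risk allele multiplied by that allele's effect size from the discovery GWAS, summed over variants. A minimal sketch with toy numbers (the genotypes and weights are illustrative, not data from either study):

    ```python
    # Minimal sketch of a genomic (polygenic) risk score: weighted sum of allele dosages.
    # Genotypes and effect sizes are toy values, not data from either GWAS.
    import numpy as np

    # Rows = individuals, columns = variants; entries are risk-allele dosages (0, 1, 2).
    genotypes = np.array([[0, 1, 2, 1],
                          [2, 0, 1, 0],
                          [1, 1, 1, 2]])

    # Per-variant effect sizes (e.g., log odds ratios) from the discovery GWAS.
    effect_sizes = np.array([0.12, -0.05, 0.08, 0.20])

    risk_scores = genotypes @ effect_sizes
    print(risk_scores)   # one score per individual, to be tested against the other trait
    ```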

    Additional information

    1-s2.0-S0197458020301305-mmc1.docx
  • Schijven, D., Veldink, J. H., & Luykx, J. J. (2020). Genetic cross-disorder analysis in psychiatry: from methodology to clinical utility. The British Journal of Psychiatry, 216(5), 246-249. doi:10.1192/bjp.2019.72.

    Abstract

    Genome-wide association studies have uncovered hundreds of loci associated with psychiatric disorders. Cross-disorder studies are among the prime ramifications of such research. Here, we discuss the methodology of the most widespread methods and their clinical utility with regard to diagnosis, prediction, disease aetiology and treatment in psychiatry.
  • Schijven, D., Zinkstok, J. R., & Luykx, J. J. (2020). Van genetische bevindingen naar de klinische praktijk van de psychiater: Hoe genetica precisiepsychiatrie mogelijk kan maken [From genetic findings to the psychiatrist's clinical practice: How genetics can make precision psychiatry possible]. Tijdschrift voor Psychiatrie, 62(9), 776-783.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schoenmakers, G.-J. (2020). Freedom in the Dutch middle-field: Deriving discourse structure at the syntax-pragmatics interface. Glossa: a journal of general linguistics, 5(1): 114. doi:10.5334/gjgl.1307.

    Abstract

    This paper experimentally explores the optionality of Dutch scrambling structures with a definite object and an adverb. Most researchers argue that such structures are not freely interchangeable, but are subject to a strict discourse template. Existing analyses are based primarily on intuitions of the researchers, while experimental support is scarce. This paper reports on two experiments to gauge the existence of a strict discourse template. The discourse status of definite objects in scrambling clauses is first probed in a fill-in-the-blanks experiment and subsequently manipulated in a speeded judgment experiment. The results of these experiments indicate that scrambling is not as restricted as is commonly claimed. Although mismatches between surface order and pragmatic interpretation lead to a penalty in judgment rates and a rise in reaction times, they nonetheless occur in production and yield fully acceptable structures. Crucially, the penalties and delays emerge only in scrambling clauses with an adverb that is sensitive to focus placement. This paper argues that scrambling does not map onto discourse structure in the strict way proposed in most literature. Instead, a more complex syntax of deriving discourse relations is proposed which submits that the Dutch scrambling pattern results from two familiar processes which apply at the syntax-pragmatics interface: reconstruction and covert raising.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña [General and specific class markers in Miraña]. Faits de Langues, 21, 121-132.
  • Seijdel, N., Tsakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2020). Depth in convolutional neural networks solves scene segmentation. PLOS Computational Biology, 16: e1008022. doi:10.1371/journal.pcbi.1008022.

    Abstract

    Feed-forward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans however suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object- and background information. For more shallow networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
  • Seijdel, N., Jahfari, S., Groen, I. I. A., & Scholte, H. S. (2020). Low-level image statistics in natural scenes influence perceptual decision-making. Scientific Reports, 10: 10573. doi:10.1038/s41598-020-67661-8.

    Abstract

    A fundamental component of interacting with our environment is gathering and interpretation of sensory information. When investigating how perceptual information influences decision-making, most researchers have relied on manipulated or unnatural information as perceptual input, resulting in findings that may not generalize to real-world scenes. Unlike simplified, artificial stimuli, real-world scenes contain low-level regularities that are informative about the structural complexity, which the brain could exploit. In this study, participants performed an animal detection task on low, medium or high complexity scenes as determined by two biologically plausible natural scene statistics, contrast energy (CE) or spatial coherence (SC). In experiment 1, stimuli were sampled such that CE and SC both influenced scene complexity. Diffusion modelling showed that the speed of information processing was affected by low-level scene complexity. Experiment 2a/b refined these observations by showing how isolated manipulation of SC resulted in weaker but comparable effects, with an additional change in response boundary, whereas manipulation of only CE had no effect. Overall, performance was best for scenes with intermediate complexity. Our systematic definition quantifies how natural scene complexity interacts with decision-making. We speculate that CE and SC serve as an indication to adjust perceptual decision-making based on the complexity of the input.
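    Contrast energy and spatial coherence are summary statistics of an image's local contrast distribution. As a rough illustration only (the paper's exact estimation procedure may differ), one can fit a Weibull distribution to local gradient magnitudes and read off its scale and shape parameters; the test image and filter scale below are assumptions.

    ```python
    # Rough illustration: Weibull summary of an image's local contrast distribution.
    # This approximates the spirit of CE/SC; it is not the paper's exact procedure.
    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)
    image = rng.random((256, 256))            # stand-in for a grayscale natural scene

    contrast = gaussian_gradient_magnitude(image, sigma=1.5).ravel()
    contrast = contrast[contrast > 0]

    # Scale parameter tracks overall contrast strength (CE-like);
    # shape parameter tracks how the contrasts are distributed (SC-like).
    shape, loc, scale = weibull_min.fit(contrast, floc=0)
    print(f"shape (SC-like): {shape:.2f}, scale (CE-like): {scale:.3f}")
    ```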

    Additional information

    supplementary materials data code and data
  • Sekine, K., Schoechl, C., Mulder, K., Holler, J., Kelly, S., Furman, R., & Ozyurek, A. (2020). Evidence for children's online integration of simultaneous information from speech and iconic gestures: An ERP study. Language, Cognition and Neuroscience, 35(10), 1283-1294. doi:10.1080/23273798.2020.1737719.

    Abstract

    Children perceive iconic gestures, along with speech they hear. Previous studies have shown
    that children integrate information from both modalities. Yet it is not known whether children
    can integrate both types of information simultaneously as soon as they are available, as adults
    do, or process them separately initially and integrate them later. Using electrophysiological
    measures, we examined the online neurocognitive processing of gesture-speech integration in
    6- to 7-year-old children. We focused on the N400 event-related potentials component which
    is modulated by semantic integration load. Children watched video clips of matching or
    mismatching gesture-speech combinations, which varied the semantic integration load. The
    ERPs showed that the amplitude of the N400 was larger in the mismatching condition than in
    the matching condition. This finding provides the first neural evidence that by the ages of 6
    or 7, children integrate multimodal semantic information in an online fashion comparable to
    that of adults.
  • Senft, G. (2020). “.. to grasp the native's point of view..” — A plea for a holistic documentation of the Trobriand Islanders' language, culture and cognition. Russian Journal of Linguistics, 24(1), 7-30. doi:10.22363/2687-0088-2020-24-1-7-30.

    Abstract

    In his famous introduction to his monograph “Argonauts of the Western Pacific” Bronislaw
    Malinowski (1922: 24f.) points out that a “collection of ethnographic statements, characteristic
    narratives, typical utterances, items of folk-lore and magical formulae has to be given as a corpus
    inscriptionum, as documents of native mentality”. This is one of the prerequisites to “grasp the
    native's point of view, his relation to life, to realize his vision of his world”. Malinowski managed
    to document a “Corpus Inscriptionum Agriculturae Quriviniensis” in his second volume of “Coral
    Gardens and their Magic” (1935 Vol II: 79-342). But he himself did not manage to come up with a
    holistic corpus inscriptionum for the Trobriand Islanders. One of the main aims I have been pursuing
    in my research on the Trobriand Islanders' language, culture, and cognition has been to fill this
    ethnolinguistic niche. In this essay, I report what I had to do to carry out this complex and ambitious
    project, what forms and kinds of linguistic and cultural competence I had to acquire, and how I
    planned my data collection during 16 long- and short-term field trips to the Trobriand Islands
    between 1982 and 2012. The paper ends with a critical assessment of my Trobriand endeavor.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.
  • Senft, G. (1985). Kilivila: Die Sprache der Trobriander [Kilivila: The language of the Trobriand Islanders]. Studium Linguistik, 17/18, 127-138.
  • Senft, G. (1985). Klassifikationspartikel im Kilivila: Glossen zu ihrer morphologischen Rolle, ihrem Inventar und ihrer Funktion in Satz und Diskurs [Classificatory particles in Kilivila: Notes on their morphological role, their inventory, and their function in sentence and discourse]. Linguistische Berichte, 99, 373-393.
  • Senft, G. (1985). Weyeis Wettermagie: Eine ethnolinguistische Untersuchung von fünf magischen Formeln eines Wettermagiers auf den Trobriand Inseln [Weyei's weather magic: An ethnolinguistic study of five magical formulae of a weather magician on the Trobriand Islands]. Zeitschrift für Ethnologie, 110(2), 67-90.
  • Senft, G. (1985). Trauer auf Trobriand: Eine ethnologisch-linguistische Fallstudie [Mourning on the Trobriands: An ethnological-linguistic case study]. Anthropos, 80, 471-492.
  • Seuren, P. A. M. (1973). [Review of the book A comprehensive etymological dictionary of the English language by Ernst Klein]. Neophilologus, 57(4), 423-426. doi:10.1007/BF01515518.
  • Seuren, P. A. M. (1973). [Review of the book Philosophy of language by Robert J. Clack and Bertrand Russell]. Foundations of Language, 9(3), 440-441.
  • Seuren, P. A. M. (1973). [Review of the book Semantics. An interdisciplinary reader in philosophy, linguistics and psychology ed. by Danny D. Steinberg and Leon A. Jakobovits]. Neophilologus, 57(2), 198-213. doi:10.1007/BF01514332.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie" [In response to Dr. F. Balk-Smit Duyzentkunst's "The grammatical function"]. Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1980). The delimitation between semantics and pragmatics. Quaderni di Semantica, 1, 108-113; 126-134.
  • Seuren, P. A. M. (1980). Wat is taal? [What is language?]. Cahiers Bio-Wetenschappen en Maatschappij, 6(4), 23-29.
  • Seuren, P. A. M. (1973). Zero-output rules. Foundations of Language, 10(2), 317-328.
  • Shao, Z., & Rommers, J. (2020). How a question context aids word production: Evidence from the picture–word interference paradigm. Quarterly Journal of Experimental Psychology, 73(2), 165-173. doi:10.1177/1747021819882911.

    Abstract

    Difficulties in saying the right word at the right time arise at least in part because multiple response candidates are simultaneously activated in the speaker’s mind. The word selection process has been simulated using the picture–word interference task, in which participants name pictures while ignoring a superimposed written distractor word. However, words are usually produced in context, in the service of achieving a communicative goal. Two experiments addressed the questions whether context influences word production, and if so, how. We embedded the picture–word interference task in a dialogue-like setting, in which participants heard a question and named a picture as an answer to the question while ignoring a superimposed distractor word. The conversational context was either constraining or nonconstraining towards the answer. Manipulating the relationship between the picture name and the distractor, we focused on two core processes of word production: retrieval of semantic representations (Experiment 1) and phonological encoding (Experiment 2). The results of both experiments showed that naming reaction times (RTs) were shorter when preceded by constraining contexts as compared with nonconstraining contexts. Critically, constraining contexts decreased the effect of semantically related distractors but not the effect of phonologically related distractors. This suggests that conversational contexts can help speakers with aspects of the meaning of to-be-produced words, but phonological encoding processes still need to be performed as usual.
  • Sharpe, V., Weber, K., & Kuperberg, G. R. (2020). Impairments in probabilistic prediction and Bayesian learning can explain reduced neural semantic priming in schizophrenia. Schizophrenia Bulletin, 46(6), 1558-1566. doi:10.1093/schbul/sbaa069.

    Abstract

    It has been proposed that abnormalities in probabilistic prediction and dynamic belief updating explain the multiple features of schizophrenia. Here, we used electroencephalography (EEG) to ask whether these abnormalities can account for the well-established reduction in semantic priming observed in schizophrenia under nonautomatic conditions. We isolated predictive contributions to the neural semantic priming effect by manipulating the prime’s predictive validity and minimizing retroactive semantic matching mechanisms. We additionally examined the link between prediction and learning using a Bayesian model that probed dynamic belief updating as participants adapted to the increase in predictive validity. We found that patients were less likely than healthy controls to use the prime to predictively facilitate semantic processing on the target, resulting in a reduced N400 effect. Moreover, the trial-by-trial output of our Bayesian computational model explained between-group differences in trial-by-trial N400 amplitudes as participants transitioned from conditions of lower to higher predictive validity. These findings suggest that, compared with healthy controls, people with schizophrenia are less able to mobilize predictive mechanisms to facilitate processing at the earliest stages of accessing the meanings of incoming words. This deficit may be linked to a failure to adapt to changes in the broader environment. This reciprocal relationship between impairments in probabilistic prediction and Bayesian learning/adaptation may drive a vicious cycle that maintains cognitive disturbances in schizophrenia.

    Additional information

    supplementary material
  • Shen, C., & Janse, E. (2020). Maximum speech performance and executive control in young adult speakers. Journal of Speech, Language, and Hearing Research, 63, 3611-3627. doi:10.1044/2020_JSLHR-19-00257.

    Abstract

    Purpose

    This study investigated whether maximum speech performance, more specifically, the ability to rapidly alternate between similar syllables during speech production, is associated with executive control abilities in a nonclinical young adult population.
    Method

    Seventy-eight young adult participants completed two speech tasks, both operationalized as maximum performance tasks, to index their articulatory control: a diadochokinetic (DDK) task with nonword and real-word syllable sequences and a tongue-twister task. Additionally, participants completed three cognitive tasks, each covering one element of executive control (a Flanker interference task to index inhibitory control, a letter–number switching task to index cognitive switching, and an operation span task to index updating of working memory). Linear mixed-effects models were fitted to investigate how well maximum speech performance measures can be predicted by elements of executive control.
    Results

    Participants' cognitive switching ability was associated with their accuracy in both the DDK and tongue-twister speech tasks. Additionally, nonword DDK accuracy was more strongly associated with executive control than real-word DDK accuracy (which has to be interpreted with caution). None of the executive control abilities related to the maximum rates at which participants performed the two speech tasks.
    Conclusion

    These results underscore the association between maximum speech performance and executive control (cognitive switching in particular).
  • Shin, J., Ma, S., Hofer, E., Patel, Y., Vosberg, D. E., Tilley, S., Roshchupkin, G. V., Sousa, A. M. M., Jian, X., Gottesman, R., Mosley, T. H., Fornage, M., Saba, Y., Pirpamer, L., Schmidt, R., Schmidt, H., Carrion Castillo, A., Crivello, F., Mazoyer, B., Bis, J. C., Li, S., Yang, Q., Luciano, M., Karama, S., Lewis, L., Bastin, M. E., Harris, M. A., Wardlaw, J. M., Deary, I. E., Scholz, M., Loeffler, M., Witte, A. V., Beyer, F., Villringer, A., Armstrong, N. F., Mather, K. A., Ames, D., Jiang, J., Kwok, J. B., Schofield, P. R., Thalamuthu, A., Trollor, J. N., Wright, M. J., Brodaty, H., Wen, W., Sachdev, P. S., Terzikhan, N., Evans, T. E., Adams, H. H. H. H., Ikram, M. A., Frenzel, S., Van der Auwera-Palitschka, S., Wittfeld, K., Bülow, R., Grabe, H. J., Tzourio, C., Mishra, A., Maingault, S., Debette, S., Gillespie, N. A., Franz, C. E., Kremen, W. S., Ding, L., Jahanshad, N., the ENIGMA Consortium, Sestan, N., Pausova, Z., Seshadri, S., Paus, T., & the neuroCHARGE Working Group (2020). Global and regional development of the human cerebral cortex: Molecular architecture and occupational aptitudes. Cerebral Cortex, 30(7), 4121-4139. doi:10.1093/cercor/bhaa035.

    Abstract

    We have carried out meta-analyses of genome-wide association studies (GWAS) (n = 23 784) of the first two principal components (PCs) that group together cortical regions with shared variance in their surface area. PC1 (global) captured variations of most regions, whereas PC2 (visual) was specific to the primary and secondary visual cortices. We identified a total of 18 (PC1) and 17 (PC2) independent loci, which were replicated in another 25 746 individuals. The loci of the global PC1 included those associated previously with intracranial volume and/or general cognitive function, such as MAPT and IGF2BP1. The loci of the visual PC2 included DAAM1, a key player in the planar-cell-polarity pathway. We then tested associations with occupational aptitudes and, as predicted, found that the global PC1 was associated with General Learning Ability, and the visual PC2 was associated with the Form Perception aptitude. These results suggest that interindividual variations in global and regional development of the human cerebral cortex (and its molecular architecture) cascade—albeit in a very limited manner—to behaviors as complex as the choice of one’s occupation.
  • Sjerps, M. J., Decuyper, C., & Meyer, A. S. (2020). Initiation of utterance planning in response to pre-recorded and “live” utterances. Quarterly Journal of Experimental Psychology, 73(3), 357-374. doi:10.1177/1747021819881265.

    Abstract

    In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants’ timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants’ speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2020). The role of iconicity and simultaneity for efficient communication: The case of Italian Sign Language (LIS). Cognition, 200: 104246. doi:10.1016/j.cognition.2020.104246.

    Abstract

    A fundamental assumption about language is that, regardless of language modality, it faces the linearization problem, i.e., an event that occurs simultaneously in the world has to be split in language to be organized on a temporal scale. However, the visual modality of signed languages allows its users not only to express meaning in a linear manner but also to use iconicity and multiple articulators together to encode information simultaneously. Accordingly, in cases when it is necessary to encode informatively rich events, signers can take advantage of simultaneous encoding in order to represent information about different referents and their actions simultaneously. This in turn would lead to more iconic and direct representation. Up to now, there has been no experimental study focusing on simultaneous encoding of information in signed languages and its possible advantage for efficient communication. In the present study, we assessed how many information units can be encoded simultaneously in Italian Sign Language (LIS) and whether the amount of simultaneously encoded information varies based on the amount of information that is required to be expressed. Twenty-three deaf adults participated in a director-matcher game in which they described 30 images of events that varied in amount of information they contained. Results revealed that as the information that had to be encoded increased, signers also increased use of multiple articulators to encode different information (i.e., kinematic simultaneity) and density of simultaneously encoded information in their production. Present findings show how the fundamental properties of signed languages, i.e., iconicity and simultaneity, are used for the purpose of efficient information encoding in Italian Sign Language (LIS).

    Additional information

    Supplementary data
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488 520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Snijders, T. M., Benders, T., & Fikkert, P. (2020). Infants segment words from songs - an EEG study. Brain Sciences, 10(1): 39. doi:10.3390/brainsci10010039.

    Abstract

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
  • Sønderby, I. E., Gústafsson, Ó., Doan, N. T., Hibar, D. P., Martin-Brevet, S., Abdellaoui, A., Ames, D., Amunts, K., Andersson, M., Armstrong, N. J., Bernard, M., Blackburn, N., Blangero, J., Boomsma, D. I., Bralten, J., Brattbak, H.-R., Brodaty, H., Brouwer, R. M., Bülow, R., Calhoun, V., Caspers, S., Cavalleri, G., Chen, C.-H., Cichon, S., Ciufolini, S., Corvin, A., Crespo-Facorro, B., Curran, J. E., Dale, A. M., Dalvie, S., Dazzan, P., De Geus, E. J. C., De Zubicaray, G. I., De Zwarte, S. M. C., Delanty, N., Den Braber, A., Desrivières, S., Donohoe, G., Draganski, B., Ehrlich, S., Espeseth, T., Fisher, S. E., Franke, B., Frouin, V., Fukunaga, M., Gareau, T., Glahn, D. C., Grabe, H., Groenewold, N. A., Haavik, J., Håberg, A., Hashimoto, R., Hehir-Kwa, J. Y., Heinz, A., Hillegers, M. H. J., Hoffmann, P., Holleran, L., Hottenga, J.-J., Hulshoff, H. E., Ikeda, M., Jahanshad, N., Jernigan, T., Jockwitz, C., Johansson, S., Jonsdottir, G. A., Jönsson, E. G., Kahn, R., Kaufmann, T., Kelly, S., Kikuchi, M., Knowles, E. E. M., Kolskår, K. K., Kwok, J. B., Le Hellard, S., Leu, C., Liu, J., Lundervold, A. J., Lundervold, A., Martin, N. G., Mather, K., Mathias, S. R., McCormack, M., McMahon, K. L., McRae, A., Milaneschi, Y., Moreau, C., Morris, D., Mothersill, D., Mühleisen, T. W., Murray, R., Nordvik, J. E., Nyberg, L., Olde Loohuis, L. M., Ophoff, R., Paus, T., Pausova, Z., Penninx, B., Peralta, J. M., Pike, B., Prieto, C., Pudas, S., Quinlan, E., Quintana, D. S., Reinbold, C. S., Reis Marques, T., Reymond, A., Richard, G., Rodriguez-Herreros, B., Roiz-Santiañez, R., Rokicki, J., Rucker, J., Sachdev, P., Sanders, A.-M., Sando, S. B., Schmaal, L., Schofield, P. R., Schork, A. J., Schumann, G., Shin, J., Shumskaya, E., Sisodiya, S., Steen, V. M., Stein, D. J., Steinberg, S., Strike, L., Teumer, A., Thalamuthu, A., Tordesillas-Gutierrez, D., Turner, J., Ueland, T., Uhlmann, A., Ulfarsson, M. O., Van 't Ent, D., Van der Meer, D., Van Haren, N. E. M., Vaskinn, A., Vassos, E., Walters, G. B., Wang, Y., Wen, W., Whelan, C. D., Wittfeld, K., Wright, M., Yamamori, H., Zayats, T., Agartz, I., Westlye, L. T., Jacquemont, S., Djurovic, S., Stefansson, H., Stefansson, K., Thompson, P., & Andreassen, O. A. (2020). Dose response of the 16p11.2 distal copy number variant on intracranial volume and basal ganglia. Molecular Psychiatry, 25, 584-602. doi:10.1038/s41380-018-0118-1.

    Abstract

    Carriers of large recurrent copy number variants (CNVs) have a higher risk of developing neurodevelopmental disorders. The 16p11.2 distal CNV predisposes carriers to e.g., autism spectrum disorder and schizophrenia. We compared subcortical brain volumes of 12 16p11.2 distal deletion and 12 duplication carriers to 6882 non-carriers from the large-scale brain Magnetic Resonance Imaging collaboration, ENIGMA-CNV. After stringent CNV calling procedures, and standardized FreeSurfer image analysis, we found negative dose-response associations with copy number on intracranial volume and on regional caudate, pallidum and putamen volumes (β = −0.71 to −1.37; P < 0.0005). In an independent sample, consistent results were obtained, with significant effects in the pallidum (β = −0.95, P = 0.0042). The two data sets combined showed significant negative dose-response for the accumbens, caudate, pallidum, putamen and ICV (P = 0.0032, 8.9 × 10−6, 1.7 × 10−9, 3.5 × 10−12 and 1.0 × 10−4, respectively). Full scale IQ was lower in both deletion and duplication carriers compared to non-carriers. This is the first brain MRI study of the impact of the 16p11.2 distal CNV, and we demonstrate a specific effect on subcortical brain structures, suggesting a neuropathological pattern underlying the neurodevelopmental syndromes
  • Speed, L. J., & Majid, A. (2020). Grounding language in the neglected senses of touch, taste, and smell. Cognitive Neuropsychology, 37(5-6), 363-392. doi:10.1080/02643294.2019.1623188.

    Abstract

    Grounded theories hold sensorimotor activation is critical to language processing. Such theories have focused predominantly on the dominant senses of sight and hearing. Relatively fewer studies have assessed mental simulation within touch, taste, and smell, even though they are critically implicated in communication for important domains, such as health and wellbeing. We review work that sheds light on whether perceptual activation from lesser studied modalities contribute to meaning in language. We critically evaluate data from behavioural, imaging, and cross-cultural studies. We conclude that evidence for sensorimotor simulation in touch, taste, and smell is weak. Comprehending language related to these senses may instead rely on simulation of emotion, as well as crossmodal simulation of the “higher” senses of vision and audition. Overall, the data suggest the need for a refinement of embodiment theories, as not all sensory modalities provide equally strong evidence for mental simulation.
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Stivers, T., Mangione-Smith, R., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. Journal of Family Practice, 52(2), 140-147.
  • Sumer, B., & Ozyurek, A. (2020). No effects of modality in development of locative expressions of space in signing and speaking children. Journal of Child Language, 47(6), 1101-1131. doi:10.1017/S0305000919000928.

    Abstract

    Linguistic expressions of locative spatial relations in sign languages are mostly visually-motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (3;5-9;11 years). Unlike previous reports suggesting a boosting effect of iconicity, and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development guiding the acquisition of locative expressions by speaking and signing children.
  • Swaab, T., Brown, C. M., & Hagoort, P. (2003). Understanding words in sentence contexts: The time course of ambiguity resolution. Brain and Language, 86(2), 326-343. doi:10.1016/S0093-934X(02)00547-3.

    Abstract

    Spoken language comprehension requires rapid integration of information from multiple linguistic sources. In the present study we addressed the temporal aspects of this integration process by focusing on the time course of the selection of the appropriate meaning of lexical ambiguities (“bank”) in sentence contexts. Successful selection of the contextually appropriate meaning of the ambiguous word is dependent upon the rapid binding of the contextual information in the sentence to the appropriate meaning of the ambiguity. We used the N400 to identify the time course of this binding process. The N400 was measured to target words that followed three types of context sentences. In the concordant context, the sentence biased the meaning of the sentence-final ambiguous word so that it was related to the target. In the discordant context, the sentence context biased the meaning so that it was not related to the target. In the unrelated control condition, the sentences ended in an unambiguous noun that was unrelated to the target. Half of the concordant sentences biased the dominant meaning, and the other half biased the subordinate meaning of the sentence-final ambiguous words. The ISI between onset of the target word and offset of the sentence-final word of the context sentence was 100 ms in one version of the experiment, and 1250 ms in the second version. We found that (i) the lexically dominant meaning is always partly activated, independent of context, (ii) initially both dominant and subordinate meaning are (partly) activated, which suggests that contextual and lexical factors both contribute to sentence interpretation without context completely overriding lexical information, and (iii) strong lexical influences remain present for a relatively long period of time.
  • Swingley, D. (2003). Phonetic detail in the developing lexicon. Language and Speech, 46(3), 265-294.

    Abstract

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct pronunciations and mispronunciations of familiar words naming one of the pictures. Mispronunciations were word-initial in Experiment 1 and word-medial in Experiment 2, and in both experiments involved substituting one segment with [d] (a common sound in Dutch) or [g] (a rare sound). In both experiments, word recognition performance was better for correct pronunciations than for mispronunciations involving either substituted consonant. These effects did not depend upon children's knowledge of lexical or nonlexical phonological neighbors of the tested words. The results indicate the encoding of phonetic detail in words at 19 months.
  • Swinney, D. A., Zurif, E. B., & Cutler, A. (1980). Effects of sentential stress and word class upon comprehension in Broca’s aphasics. Brain and Language, 10, 132-144. doi:10.1016/0093-934X(80)90044-9.

    Abstract

    The roles which word class (open/closed) and sentential stress play in the sentence comprehension processes of both agrammatic (Broca's) aphasics and normal listeners were examined with a word monitoring task. Overall, normal listeners responded more quickly to stressed than to unstressed items, but showed no effect of word class. Aphasics also responded more quickly to stressed than to unstressed materials, but, unlike the normals, responded faster to open than to closed class words regardless of their stress. The results are interpreted as support for the theory that Broca's aphasics lack the functional underlying open/closed class word distinction used in word recognition by normal listeners.
  • Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.

    Abstract

    This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia.
  • Tan, Y., & Hagoort, P. (2020). Catecholaminergic modulation of semantic processing in sentence comprehension. Cerebral Cortex, 30(12), 6426-6443. doi:10.1093/cercor/bhaa204.

    Abstract

    Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, which is a CA agonist, on the electroencephalogram (EEG) response related to semantic processing using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20-mg methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other condition, they only had to attend to the font size of the sentence. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late positive complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors.
  • Ten Oever, S., Meierdierks, T., Duecker, F., De Graaf, T., & Sack, A. (2020). Phase-coded oscillatory ordering promotes the separation of closely matched representations to optimize perceptual discrimination. iScience, 23(7): 101282. doi:10.1016/j.isci.2020.101282.

    Abstract

    Low-frequency oscillations are proposed to be involved in separating neuronal representations belonging to different items. Although item-specific neuronal activity was found to cluster on different oscillatory phases, the influence of this mechanism on perception is unknown. Here, we investigated the perceptual consequences of neuronal item separation through oscillatory clustering. In an electroencephalographic experiment, participants categorized sounds parametrically varying in pitch, relative to an arbitrary pitch boundary. Pre-stimulus theta and alpha phase biased near-boundary sound categorization to one category or the other. Phase also modulated whether evoked neuronal responses contributed stronger to the fit of the sound envelope of one or another category. Intriguingly, participants with stronger oscillatory clustering (phase strongly biasing sound categorization) in the theta, but not alpha, range had steeper perceptual psychometric slopes (sharper sound category discrimination). These results indicate that neuronal sorting by phase directly influences subsequent perception and has a positive impact on discrimination performance

    Additional information

    Supplemental Information
  • Ten Oever, S., De Weerd, P., & Sack, A. T. (2020). Phase-dependent amplification of working memory content and performance. Nature Communications, 11: 1832. doi:10.1038/s41467-020-15629-7.

    Abstract

    Successful working memory performance has been related to oscillatory mechanisms operating in low-frequency ranges. Yet, their mechanistic interaction with the distributed neural activity patterns representing the content of the memorized information remains unclear. Here, we record EEG during a working memory retention interval, while a task-irrelevant, high-intensity visual impulse stimulus is presented to boost the read-out of distributed neural activity related to the content held in working memory. Decoding of this activity with a linear classifier reveals significant modulations of classification accuracy by oscillatory phase in the theta/alpha ranges at the moment of impulse presentation. Additionally, behavioral accuracy is highest at the phases showing maximized decoding accuracy. At those phases, behavioral accuracy is higher in trials with the impulse compared to no-impulse trials. This constitutes the first evidence in humans that working memory information is maximized within limited phase ranges, and that phase-selective, sensory impulse stimulation can improve working memory.
  • Teng, X., Ma, M., Yang, J., Blohm, S., Cai, Q., & Tian, X. (2020). Constrained structure of ancient Chinese poetry facilitates speech content grouping. Current Biology, 30, 1299-1305. doi:10.1016/j.cub.2020.01.059.

    Abstract

    Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4, 5, 6, 7, 8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation—the neural phase advanced faster after listeners acquired knowledge of incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.

    Additional information

    Supplemental Information
  • Ter Hark, S. E., Jamain, S., Schijven, D., Lin, B. D., Bakker, M. K., Boland-Auge, A., Deleuze, J.-F., Troudet, R., Malhotra, A. K., Gülöksüz, S., Vinkers, C. H., Ebdrup, B. H., Kahn, R. S., Leboyer, M., & Luykx, J. J. (2020). A new genetic locus for antipsychotic-induced weight gain: A genome-wide study of first-episode psychosis patients using amisulpride (from the OPTiMiSE cohort). Journal of Psychopharmacology, 34(5), 524-531. doi:10.1177/0269881120907972.

    Abstract

    Background: Antipsychotic-induced weight gain is a common and debilitating side effect of antipsychotics. Although genome-wide association studies of antipsychotic-induced weight gain have been performed, few genome-wide loci have been discovered. Moreover, these genome-wide association studies have included a wide variety of antipsychotic compounds. Aims: We aim to gain more insight in the genomic loci affecting antipsychotic-induced weight gain. Given the variable pharmacological properties of antipsychotics, we hypothesized that targeting a single antipsychotic compound would provide new clues about genomic loci affecting antipsychotic-induced weight gain. Methods: All subjects included for this genome-wide association study (n=339) were first-episode schizophrenia spectrum disorder patients treated with amisulpride and were minimally medicated (defined as antipsychotic use <2 weeks in the previous year and/or <6 weeks lifetime). Weight gain was defined as the increase in body mass index from before until approximately 1 month after amisulpride treatment. Results: Our genome-wide association analyses for antipsychotic-induced weight gain yielded one genome-wide significant hit (rs78310016; β=1.05; p=3.66 × 10−8; n=206) in a locus not previously associated with antipsychotic-induced weight gain or body mass index. Minor allele carriers had an odds ratio of 3.98 (p=1.0 × 10−3) for clinically meaningful antipsychotic-induced weight gain (≥7% of baseline weight). In silico analysis elucidated a chromatin interaction with 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1. In an attempt to replicate single-nucleotide polymorphisms previously associated with antipsychotic-induced weight gain, we found none were associated with amisulpride-induced weight gain. Conclusion: Our findings suggest the involvement of rs78310016 and possibly 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1 in antipsychotic-induced weight gain. In line with the unique binding profile of this atypical antipsychotic, our findings furthermore hint that biological mechanisms underlying amisulpride-induced weight gain differ from antipsychotic-induced weight gain by other atypical antipsychotics.
  • Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.

    Abstract

    Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.

    Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.

    Result: The simulation results indicate that the output of the model with the simulated feedforward deficit resembled the group findings for the human speakers with AOS best.

    Conclusion: These results provide support to the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control.
  • Terrill, A., & Dunn, M. (2003). Orthographic design in the Solomon Islands: The social, historical, and linguistic situation of Touo (Baniata). Written Language and Literacy, 6(2), 177-192. doi:10.1075/wll.6.2.03ter.

    Abstract

    This paper discusses the development of an orthography for the Touo language (Solomon Islands). Various orthographies have been proposed for this language in the past, and the paper discusses why they are perceived by the community to have failed. Current opinion about orthography development within the Touo-speaking community is divided along religious, political, and geographical grounds; and the development of a successful orthography must take into account a variety of opinions. The paper examines the social, historical, and linguistic obstacles that have hitherto prevented the development of an accepted Touo orthography, and presents a new proposal which has thus far gained acceptance with community leaders. The fundamental issue is that creating an orthography for a language takes place in a social, political, and historical context; and for an orthography to be acceptable for the speakers of a language, all these factors must be taken into account.
  • Terrill, A. (2003). Linguistic stratigraphy in the central Solomon Islands: Lexical evidence of early Papuan/Austronesian interaction. Journal of the Polynesian Society, 112(4), 369-401.

    Abstract

    The extent to which linguistic borrowing can be used to shed light on the existence and nature of early contact between Papuan and Oceanic speakers is examined. The question is addressed by taking one Papuan language, Lavukaleve, spoken in the Russell Islands, central Solomon Islands and examining lexical borrowings between it and nearby Oceanic languages, and with reconstructed forms of Proto Oceanic. Evidence from ethnography, culture history and archaeology, when added to the linguistic evidence provided in this study, indicates long-standing cultural links between other (non-Russell) islands. The composite picture is one of a high degree of cultural contact with little linguistic mixing, i.e., little or no changes affecting the structure of the languages and actually very little borrowed vocabulary.
  • Thompson, P. M., Jahanshad, N., Ching, C. R. K., Salminen, L. E., Thomopoulos, S. I., Bright, J., Baune, B. T., Bertolín, S., Bralten, J., Bruin, W. B., Bülow, R., Chen, J., Chye, Y., Dannlowski, U., De Kovel, C. G. F., Donohoe, G., Eyler, L. T., Faraone, S. V., Favre, P., Filippi, C. A., Frodl, T., Garijo, D., Gil, Y., Grabe, H. J., Grasby, K. L., Hajek, T., Han, L. K. M., Hatton, S. N., Hilbert, K., Ho, T. C., Holleran, L., Homuth, G., Hosten, N., Houenou, J., Ivanov, I., Jia, T., Kelly, S., Klein, M., Kwon, J. S., Laansma, M. A., Leerssen, J., Lueken, U., Nunes, A., O'Neill, J., Opel, N., Piras, F., Piras, F., Postema, M., Pozzi, E., Shatokhina, N., Soriano-Mas, C., Spalletta, G., Sun, D., Teumer, A., Tilot, A. K., Tozzi, L., Van der Merwe, C., Van Someren, E. J. W., Van Wingen, G. A., Völzke, H., Walton, E., Wang, L., Winkler, A. M., Wittfeld, K., Wright, M. J., Yun, J.-Y., Zhang, G., Zhang-James, Y., Adhikari, B. M., Agartz, I., Aghajani, M., Aleman, A., Althoff, R. R., Altmann, A., Andreassen, O. A., Baron, D. A., Bartnik-Olson, B. L., Bas-Hoogendam, J. M., Baskin-Sommers, A. R., Bearden, C. E., Berner, L. A., Boedhoe, P. S. W., Brouwer, R. M., Buitelaar, J. K., Caeyenberghs, K., Cecil, C. A. M., Cohen, R. A., Cole, J. H., Conrod, P. J., De Brito, S. A., De Zwarte, S. M. C., Dennis, E. L., Desrivieres, S., Dima, D., Ehrlich, S., Esopenko, C., Fairchild, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Garavan, H. P., Glahn, D. C., Groenewold, N. A., Gurholt, T. P., Gutman, B. A., Hahn, T., Harding, I. H., Hernaus, D., Hibar, D. P., Hillary, F. G., Hoogman, M., Hulshoff Pol, H. E., Jalbrzikowski, M., Karkashadze, G. A., Klapwijk, E. T., Knickmeyer, R. C., Kochunov, P., Koerte, I. K., Kong, X., Liew, S.-L., Lin, A. P., Logue, M. W., Luders, E., Macciardi, F., Mackey, S., Mayer, A. R., McDonald, C. R., McMahon, A. B., Medland, S. E., Modinos, G., Morey, R. A., Mueller, S. C., Mukherjee, P., Namazova-Baranova, L., Nir, T. M., Olsen, A., Paschou, P., Pine, D. S., Pizzagalli, F., Rentería, M. E., Rohrer, J. D., Sämann, P. G., Schmaal, L., Schumann, G., Shiroishi, M. S., Sisodiya, S. M., Smit, D. J. A., Sønderby, I. E., Stein, D. J., Stein, J. L., Tahmasian, M., Tate, D. F., Turner, J. A., Van den Heuvel, O. A., Van der Wee, N. J. A., Van der Werf, Y. D., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Velzen, L. S., Veer, I. M., Veltman, D. J., Villalon-Reina, J. E., Walter, H., Whelan, C. D., Wilde, E. A., Zarei, M., Zelman, V., & Enigma Consortium (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Translational Psychiatry, 10(1): 100. doi:10.1038/s41398-020-0705-1.

    Abstract

    This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries, studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions, other WGs study normal variation due to sex and gender differences, or development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of “big data” (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA’s activities and ongoing projects, and describe the successes and challenges encountered along the way. We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors.

    Additional information

    41398_2020_705_MOESM1_ESM.pdf
  • Thompson, P. A., Bishop, D. V. M., Eising, E., Fisher, S. E., & Newbury, D. F. (2020). Generalized Structured Component Analysis in candidate gene association studies: Applications and limitations [version 2; peer review: 3 approved]. Wellcome Open Research, 4: 142. doi:10.12688/wellcomeopenres.15396.2.

    Abstract

    Background: Generalized Structured Component Analysis (GSCA) is a component-based alternative to traditional covariance-based structural equation modelling. This method has previously been applied to test for association between candidate genes and clinical phenotypes, contrasting with traditional genetic association analyses that adopt univariate testing of many individual single nucleotide polymorphisms (SNPs) with correction for multiple testing.
    Methods: We first evaluate the ability of the GSCA method to replicate two previous findings from a genetics association study of developmental language disorders. We then present the results of a simulation study to test the validity of the GSCA method under more restrictive data conditions, using smaller sample sizes and larger numbers of SNPs than have previously been investigated. Finally, we compare GSCA performance against univariate association analysis conducted using PLINK v1.9.
    Results: Results from simulations show that power to detect effects depends not just on sample size, but also on the ratio of SNPs with effect to number of SNPs tested within a gene. Inclusion of many SNPs in a model dilutes true effects.
    Conclusions: We propose that GSCA is a useful method for replication studies, when candidate SNPs have been identified, but should not be used for exploratory analysis.

    Additional information

    data via OSF
  • Todorova, L., & Neville, D. A. (2020). Associative and identity words promote the speed of visual categorization: A hierarchical drift diffusion account. Frontiers in Psychology, 11: 955. doi:10.3389/fpsyg.2020.00955.

    Abstract

    Words can either boost or hinder the processing of visual information, which can lead to facilitation or interference of the behavioral response. We investigated the stage (response execution or target processing) of verbal interference/facilitation in the response priming paradigm with a gender categorization task. Participants in our study were asked to judge whether the presented stimulus was a female or male face that was briefly preceded by a gender word either congruent (prime: “man,” target: “man”), incongruent (prime: “woman,” target: “man”) or neutral (prime: “day,” target: “man”) with respect to the face stimulus. We investigated whether related word-picture pairs resulted in faster reaction times in comparison to the neutral word-picture pairs (facilitation) and whether unrelated word-picture pairs resulted in slower reaction times in comparison to neutral word-picture pairs (interference). We further examined whether these effects (if any) map onto response conflict or aspects of target processing. In addition, identity (“man,” “woman”) and associative (“tie,” “dress”) primes were introduced to investigate the cognitive mechanisms of semantic and Stroop-like effects in response priming (introduced respectively by associations and identity words). We analyzed responses and reaction times using the drift diffusion model to examine the effect of facilitation and/or interference as a function of the prime type. We found that regardless of prime type words introduce a facilitatory effect, which maps to the processes of visual attention and response execution.
  • Todorova, L., Neville, D. A., & Piai, V. (2020). Lexical-semantic and executive deficits revealed by computational modelling: A drift diffusion model perspective. Neuropsychologia, 146: 107560. doi:10.1016/j.neuropsychologia.2020.107560.

    Abstract

    Flexible language use requires coordinated functioning of two systems: conceptual representations and control. The interaction between the two systems can be observed when people are asked to match a word to a picture. Participants are slower and less accurate for related word-picture pairs (word: banana, picture: apple) relative to unrelated pairs (word: banjo, picture: apple). The mechanism underlying interference however is still unclear. We analyzed word-picture matching (WPM) performance of patients with stroke-induced lesions to the left-temporal (N = 5) or left-frontal cortex (N = 5) and matched controls (N = 12) using the drift diffusion model (DDM). In DDM, the process of making a decision is described as the stochastic accumulation of evidence towards a response. The parameters of the DDM model that characterize this process are decision threshold, drift rate, starting point and non-decision time, each of which bears cognitive interpretability. We compared the estimated model parameters from controls and patients to investigate the mechanisms of WPM interference. WPM performance in controls was explained by the amount of information needed to make a decision (decision threshold): a higher threshold was associated with related word-picture pairs relative to unrelated ones. No difference was found in the quality of the evidence (drift rate). This suggests an executive rather than semantic mechanism underlying WPM interference. Both patients with temporal and frontal lesions exhibited both increased drift rate and decision threshold for unrelated pairs relative to related ones. Left-frontal and temporal damage affected the computations required by WPM similarly, resulting in systematic deficits across lexical-semantic memory and executive functions. These results support a diverse but interactive role of lexical-semantic memory and semantic control mechanisms.

    Additional information

    supplementary material
  • Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2020). The communicative advantage: How kinematic signaling supports semantic comprehension. Psychological Research, 84, 1897-1911. doi:10.1007/s00426-019-01198-y.

    Abstract

    Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees’ comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more- (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of actor’s faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance in more- compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. Results provide insights into mutual understanding processes as well as creating artificial communicative agents.

    Additional information

    Supplementary material
  • Trujillo, J. P., Simanova, I., Ozyurek, A., & Bekkering, H. (2020). Seeing the unexpected: How brains read communicative intent through kinematics. Cerebral Cortex, 30(3), 1056-1067. doi:10.1093/cercor/bhz148.

    Abstract

    Social interaction requires us to recognize subtle cues in behavior, such as kinematic differences in actions and gestures produced with different social intentions. Neuroscientific studies indicate that the putative mirror neuron system (pMNS) in the premotor cortex and mentalizing system (MS) in the medial prefrontal cortex support inferences about contextually unusual actions. However, little is known regarding the brain dynamics of these systems when viewing communicatively exaggerated kinematics. In an event-related functional magnetic resonance imaging experiment, 28 participants viewed stick-light videos of pantomime gestures, recorded in a previous study, which contained varying degrees of communicative exaggeration. Participants made either social or nonsocial classifications of the videos. Using participant responses and pantomime kinematics, we modeled the probability of each video being classified as communicative. Interregion connectivity and activity were modulated by kinematic exaggeration, depending on the task. In the Social Task, communicativeness of the gesture increased activation of several pMNS and MS regions and modulated top-down coupling from the MS to the pMNS, but engagement of the pMNS and MS was not found in the nonsocial task. Our results suggest that expectation violations can be a key cue for inferring communicative intention, extending previous findings from wholly unexpected actions to more subtle social signaling.
  • Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing publication bias in Meta-Analysis: Empirical findings from community-augmented meta-analyses of infant language development. Zeitschrift für Psychologie, 228(1), 50-61. doi:10.1027/2151-2604/a000393.

    Abstract

    Meta-analyses are an indispensable research synthesis tool for characterizing bodies of literature and advancing theories. One important open question concerns the inclusion of unpublished data into meta-analyses. Finding such studies can be effortful, but their exclusion potentially leads to consequential biases like overestimation of a literature’s mean effect. We address two questions about unpublished data using MetaLab, a collection of community-augmented meta-analyses focused on developmental psychology. First, we assess to what extent MetaLab datasets include gray literature, and by what search strategies they are unearthed. We find that an average of 11% of datapoints are from unpublished literature; standard search strategies like database searches, complemented with individualized approaches like including authors’ own data, contribute the majority of this literature. Second, we analyze the effect of including versus excluding unpublished literature on estimates of effect size and publication bias, and find this decision does not affect outcomes. We discuss lessons learned and implications.

    Additional information

    Link to Dataset on PsychArchives
  • Tulling, M., Law, R., Cournane, A., & Pylkkänen, L. (2020). Neural correlates of modal displacement and discourse-updating under (un)certainty. eNeuro, 8(1): 0290-20.2020. doi:10.1523/ENEURO.0290-20.2020.

    Abstract

    A hallmark of human thought is the ability to think about not just the actual world, but also about alternative ways the world could be. One way to study this contrast is through language. Language has grammatical devices for expressing possibilities and necessities, such as the words might or must. With these devices, called “modal expressions,” we can study the actual vs. possible contrast in a highly controlled way. While factual utterances such as “There is a monster under my bed” update the here-and-now of a discourse model, a modal version of this sentence, “There might be a monster under my bed,” displaces from the here-and-now and merely postulates a possibility. We used magnetoencephalography (MEG) to test whether the processes of discourse updating and modal displacement dissociate in the brain. Factual and modal utterances were embedded in short narratives, and across two experiments, factual expressions increased the measured activity over modal expressions. However, the localization of the increase appeared to depend on perspective: signal localizing in right temporo-parietal areas increased when updating others’ beliefs, while frontal medial areas seem sensitive to updating one’s own beliefs. The presence of modal displacement did not elevate MEG signal strength in any of our analyses. In sum, this study identifies potential neural signatures of the process by which facts get added to our mental representation of the world.

    Additional information

    Link to Preprint on BioRxiv
  • Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Interleaved lexical and audiovisual information can retune phoneme boundaries. Attention, Perception & Psychophysics, 82, 2018-2026. doi:10.3758/s13414-019-01961-8.

    Abstract

    To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved they were able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information and given the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were relatively weaker compared to audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of the lexical retuning effects suggests that it may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.
  • Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Audiovisual and lexical cues do not additively enhance perceptual adaptation. Psychonomic Bulletin & Review, 27, 707-715. doi:10.3758/s13423-020-01728-5.

    Abstract

    When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.

    Additional information

    Data and materials
  • Ullas, S., Hausfeld, L., Cutler, A., Eisner, F., & Formisano, E. (2020). Neural correlates of phonetic adaptation as induced by lexical and audiovisual context. Journal of Cognitive Neuroscience, 32(11), 2145-2158. doi:10.1162/jocn_a_01608.

    Abstract

    When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
  • Ünal, E., & Papafragou, A. (2020). Relations between language and cognition: Evidentiality and sources of knowledge. Topics in Cognitive Science, 12(1), 115-135. doi:10.1111/tops.12355.

    Abstract

    Understanding and acquiring language involve mapping language onto conceptual representations. Nevertheless, several issues remain unresolved with respect to (a) how such mappings are performed, and (b) whether conceptual representations are susceptible to cross‐linguistic influences. In this article, we discuss these issues focusing on the domain of evidentiality and sources of knowledge. Empirical evidence in this domain yields growing support for the proposal that linguistic categories of evidentiality are tightly linked to, build on, and reflect conceptual representations of sources of knowledge that are shared across speakers of different languages.
  • Urbanus, B. H. A., Peter, S., Fisher, S. E., & De Zeeuw, C. I. (2020). Region-specific Foxp2 deletions in cortex, striatum or cerebellum cannot explain vocalization deficits observed in spontaneous global knockouts. Scientific Reports, 10: 21631. doi:10.1038/s41598-020-78531-8.

    Abstract

    FOXP2 has been identified as a gene related to speech in humans, based on rare mutations that yield significant impairments in speech at the level of both motor performance and language comprehension. Disruptions of the murine orthologue Foxp2 in mouse pups have been shown to interfere with production of ultrasonic vocalizations (USVs). However, it remains unclear which structures are responsible for these deficits. Here, we show that conditional knockout mice with selective Foxp2 deletions targeting the cerebral cortex, striatum or cerebellum, three key sites of motor control with robust neural gene expression, do not recapture the profile of pup USV deficits observed in mice with global disruptions of this gene. Moreover, we observed that global Foxp2 knockout pups show substantive reductions in USV production as well as an overproduction of short broadband noise “clicks”, which was not present in the brain region-specific knockouts. These data indicate that deficits of Foxp2 expression in the cortex, striatum or cerebellum cannot solely explain the disrupted vocalization behaviours in global Foxp2 knockouts. Our findings raise the possibility that the impact of Foxp2 disruption on USV is mediated at least in part by effects of this gene on the anatomical prerequisites for vocalizing.
  • Van Turennout, M., Bielamowicz, L., & Martin, A. (2003). Modulation of neural activity during object naming: Effects of time and practice. Cerebral Cortex, 13(4), 381-391.

    Abstract

    Repeated exposure to objects improves our ability to identify and name them, even after a long delay. Previous brain imaging studies have demonstrated that this experience-related facilitation of object naming is associated with neural changes in distinct brain regions. We used event-related functional magnetic resonance imaging (fMRI) to examine the modulation of neural activity in the object naming system as a function of experience and time. Pictures of common objects were presented repeatedly for naming at different time intervals (1 h, 6 h and 3 days) before scanning, or at 30 s intervals during scanning. The results revealed that as objects became more familiar with experience, activity in occipitotemporal and left inferior frontal regions decreased while activity in the left insula and basal ganglia increased. In posterior regions, reductions in activity as a result of multiple repetitions did not interact with time, whereas in left inferior frontal cortex larger decreases were observed when repetitions were spaced out over time. This differential modulation of activity in distinct brain regions provides support for the idea that long-lasting object priming is mediated by two neural mechanisms. The first mechanism may involve changes in object-specific representations in occipitotemporal cortices, the second may be a form of procedural learning involving a reorganization in brain circuitry that leads to more efficient name retrieval.
  • Van Berkum, J. J. A., Zwitserlood, P., Hagoort, P., & Brown, C. M. (2003). When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cognitive Brain Research, 17(3), 701-718. doi:10.1016/S0926-6410(03)00196-4.

    Abstract

    In two ERP experiments, we assessed the impact of discourse-level information on the processing of an unfolding spoken sentence. Subjects listened to sentences like Jane told her brother that he was exceptionally quick/slow, designed such that the alternative critical words were equally acceptable within the local sentence context. In Experiment 1, these sentences were embedded in a discourse that rendered one of the critical words anomalous (e.g. because Jane’s brother had in fact done something very quickly). Relative to the coherent alternative, these discourse-anomalous words elicited a standard N400 effect that started at 150–200 ms after acoustic word onset. Furthermore, when the same sentences were heard in isolation in Experiment 2, the N400 effect disappeared. The results demonstrate that our listeners related the unfolding spoken words to the wider discourse extremely rapidly, after having heard the first two or three phonemes only, and in many cases well before the end of the word. In addition, the identical nature of discourse- and sentence-dependent N400 effects suggests that from the perspective of the word-elicited comprehension process indexed by the N400, the interpretive context delineated by a single unfolding sentence and a larger discourse is functionally identical.
  • Van der Meer, D., Sønderby, I. E., Kaufmann, T., Walters, G. B., Abdellaoui, A., Ames, D., Amunts, K., Andersson, M., Armstrong, N. J., Bernard, M., Blackburn, N. B., Blangero, J., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Bülow, R., Cahn, W., Calhoun, V. D., Caspers, S., Cavalleri, G. L., Ching, C. R. K., Cichon, S., Ciufolini, S., Corvin, A., Crespo-Facorro, B., Curran, J. E., Dalvie, S., Dazzan, P., De Geus, E. J. C., De Zubicaray, G. I., De Zwarte, S. M. C., Delanty, N., Den Braber, A., Desrivieres, S., Di Forti, M., Doherty, J. L., Donohoe, G., Ehrlich, S., Eising, E., Espeseth, T., Fisher, S. E., Fladby, T., Frei, O., Frouin, V., Fukunaga, M., Gareau, T., Glahn, D. C., Grabe, H. J., Groenewold, N. A., Gústafsson, Ó., Haavik, J., Haberg, A. K., Hashimoto, R., Hehir-Kwa, J. Y., Hibar, D. P., Hillegers, M. H. J., Hoffmann, P., Holleran, L., Hottenga, J.-J., Hulshoff Pol, H. E., Ikeda, M., Jacquemont, S., Jahanshad, N., Jockwitz, C., Johansson, S., Jönsson, E. G., Kikuchi, M., Knowles, E. E. M., Kwok, J. B., Le Hellard, S., Linden, D. E. J., Liu, J., Lundervold, A., Lundervold, A. J., Martin, N. G., Mather, K. A., Mathias, S. R., McMahon, K. L., McRae, A. F., Medland, S. E., Moberget, T., Moreau, C., Morris, D. W., Mühleisen, T. W., Murray, R. M., Nordvik, J. E., Nyberg, L., Olde Loohuis, L. M., Ophoff, R. A., Owen, M. J., Paus, T., Pausova, Z., Peralta, J. M., Pike, B., Prieto, C., Quinlan, E. B., Reinbold, C. S., Reis Marques, T., Rucker, J. J. H., Sachdev, P. S., Sando, S. B., Schofield, P. R., Schork, A. J., Schumann, G., Shin, J., Shumskaya, E., Silva, A. I., Sisodiya, S. M., Steen, V. M., Stein, D. J., Strike, L. T., Tamnes, C. K., Teumer, A., Thalamuthu, A., Tordesillas-Gutiérrez, D., Uhlmann, A., Úlfarsson, M. Ö., Van 't Ent, D., Van den Bree, M. B. M., Vassos, E., Wen, W., Wittfeld, K., Wright, M. J., Zayats, T., Dale, A. M., Djurovic, S., Agartz, I., Westlye, L. T., Stefánsson, H., Stefánsson, K., Thompson, P. M., & Andreassen, O. A. (2020). Association of copy number variation of the 15q11.2 BP1-BP2 region with cortical and subcortical morphology and cognition. JAMA Psychiatry, 77(4), 420-430. doi:10.1001/jamapsychiatry.2019.3779.

    Abstract

    Importance Recurrent microdeletions and duplications in the genomic region 15q11.2 between breakpoints 1 (BP1) and 2 (BP2) are associated with neurodevelopmental disorders. These structural variants are present in 0.5% to 1.0% of the population, making 15q11.2 BP1-BP2 the site of the most prevalent known pathogenic copy number variation (CNV). It is unknown to what extent this CNV influences brain structure and affects cognitive abilities.

    Objective To determine the association of the 15q11.2 BP1-BP2 deletion and duplication CNVs with cortical and subcortical brain morphology and cognitive task performance.

    Design, Setting, and Participants In this genetic association study, T1-weighted brain magnetic resonance imaging data were combined with genetic data from the ENIGMA-CNV consortium and the UK Biobank, with a replication cohort from Iceland. In total, 203 deletion carriers, 45 247 noncarriers, and 306 duplication carriers were included. Data were collected from August 2015 to April 2019, and data were analyzed from September 2018 to September 2019.

    Main Outcomes and Measures The associations of the CNV with global and regional measures of surface area and cortical thickness as well as subcortical volumes were investigated, correcting for age, age², sex, scanner, and intracranial volume. Additionally, measures of cognitive ability were analyzed in the full UK Biobank cohort.

    Results Of 45 756 included individuals, the mean (SD) age was 55.8 (18.3) years, and 23 754 (51.9%) were female. Compared with noncarriers, deletion carriers had a lower surface area (Cohen d = −0.41; SE, 0.08; P = 4.9 × 10⁻⁸), thicker cortex (Cohen d = 0.36; SE, 0.07; P = 1.3 × 10⁻⁷), and a smaller nucleus accumbens (Cohen d = −0.27; SE, 0.07; P = 7.3 × 10⁻⁵). There was also a significant negative dose response on cortical thickness (β = −0.24; SE, 0.05; P = 6.8 × 10⁻⁷). Regional cortical analyses showed a localization of the effects to the frontal, cingulate, and parietal lobes. Further, cognitive ability was lower for deletion carriers compared with noncarriers on 5 of 7 tasks.

    Conclusions and Relevance These findings, from the largest CNV neuroimaging study to date, provide evidence that 15q11.2 BP1-BP2 structural variation is associated with brain morphology and cognition, with deletion carriers being particularly affected. The pattern of results fits with known molecular functions of genes in the 15q11.2 BP1-BP2 region and suggests involvement of these genes in neuronal plasticity. These neurobiological effects likely contribute to the association of this CNV with neurodevelopmental disorders.
  • Van der Meer, D., Rokicki, J., Kaufmann, T., Córdova-Palomera, A., Moberget, T., Alnæs, D., Bettella, F., Frei, O., Trung Doan, N., Sønderby, I. E., Smeland, O. B., Agartz, I., Bertolino, A., Bralten, J., Brandt, C. L., Buitelaar, J. K., Djurovic, S., Van Donkelaar, M. M. J., Dørum, E. S., Espeseth, T., Faraone, S. V., Fernandez, G., Fisher, S. E., Franke, B., Haatveit, B., Hartman, C., Hoekstra, P. J., Haberg, A. K., Jönsson, E. G., Kolskår, K. K., Le Hellard, S., Lund, M. J., Lundervold, A. J., Lundervold, A., Melle, I., Monereo Sánchez, J., Norbom, L. C., Nordvik, J. E., Nyberg, L., Oosterlaan, J., Papalino, M., Papassotiropoulos, A., Pergola, G., De Quervain, D. J. F., Richard, G., Sanders, A.-M., Selvaggi, P., Shumskaya, E., Steen, V. M., Tønnesen, S., Ulrichsen, K. M., Zwiers, M., Andreassen, O. A., & Westlye, L. T. (2020). Brain scans from 21297 individuals reveal the genetic architecture of hippocampal subfield volumes. Molecular Psychiatry, 25, 3053-3065. doi:10.1038/s41380-018-0262-7.

    Abstract

    The hippocampus is a heterogeneous structure, comprising histologically distinguishable subfields. These subfields are differentially involved in memory consolidation, spatial navigation and pattern separation, complex functions often impaired in individuals with brain disorders characterized by reduced hippocampal volume, including Alzheimer’s disease (AD) and schizophrenia. Given the structural and functional heterogeneity of the hippocampal formation, we sought to characterize the subfields’ genetic architecture. T1-weighted brain scans (n = 21,297, 16 cohorts) were processed with the hippocampal subfields algorithm in FreeSurfer v6.0. We ran a genome-wide association analysis on each subfield, co-varying for whole hippocampal volume. We further calculated the single-nucleotide polymorphism (SNP)-based heritability of 12 subfields, as well as their genetic correlation with each other, with other structural brain features and with AD and schizophrenia. All outcome measures were corrected for age, sex and intracranial volume. We found 15 unique genome-wide significant loci across six subfields, of which eight had not been previously linked to the hippocampus. Top SNPs were mapped to genes associated with neuronal differentiation, locomotor behaviour, schizophrenia and AD. The volumes of all the subfields were estimated to be heritable (h² from 0.14 to 0.27, all p < 1 × 10⁻¹⁶) and clustered together based on their genetic correlations compared with other structural brain features. There was also evidence of genetic overlap of subicular subfield volumes with schizophrenia. We conclude that hippocampal subfields have partly distinct genetic determinants associated with specific biological processes and traits. Taking into account this specificity may increase our understanding of hippocampal neurobiology and associated pathologies.

    Additional information

    41380_2018_262_MOESM1_ESM.docx
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns), led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight in the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Berkum, J. J. A., Brown, C. M., Hagoort, P., & Zwitserlood, P. (2003). Event-related brain potentials reflect discourse-referential ambiguity in spoken language comprehension. Psychophysiology, 40(2), 235-248. doi:10.1111/1469-8986.00025.

    Abstract

    In two experiments, we explored the use of event-related brain potentials to selectively track the processes that establish reference during spoken language comprehension. Subjects listened to stories in which a particular noun phrase like "the girl" either uniquely referred to a single referent mentioned in the earlier discourse, or ambiguously referred to two equally suitable referents. Referentially ambiguous nouns ("the girl" with two girls introduced in the discourse context) elicited a frontally dominant and sustained negative shift in brain potentials, emerging within 300–400 ms after acoustic noun onset. The early onset of this effect reveals that reference to a discourse entity can be established very rapidly. Its morphology and distribution suggest that at least some of the processing consequences of referential ambiguity may involve an increased demand on memory resources. Furthermore, because this referentially induced ERP effect is very different from that of well-known ERP effects associated with the semantic (N400) and syntactic (e.g., P600/SPS) aspects of language comprehension, it suggests that ERPs can be used to selectively keep track of three major processes involved in the comprehension of an unfolding piece of discourse.
  • Van Gompel, R. P., & Majid, A. (2003). Antecedent frequency effects during the processing of pronouns. Cognition, 90(3), 255-264. doi:10.1016/S0010-0277(03)00161-6.

    Abstract

    An eye-movement reading experiment investigated whether the ease with which pronouns are processed is affected by the lexical frequency of their antecedent. Reading times following pronouns with infrequent antecedents were faster than following pronouns with frequent antecedents. We argue that this is consistent with a saliency account, according to which infrequent antecedents are more salient than frequent antecedents. The results are not predicted by accounts which claim that readers access all or part of the lexical properties of the antecedent during the processing of pronouns.
  • Van Os, M., De Jong, N. H., & Bosker, H. R. (2020). Fluency in dialogue: Turn‐taking behavior shapes perceived fluency in native and nonnative speech. Language Learning, 70(4), 1183-1217. doi:10.1111/lang.12416.

    Abstract

    Fluency is an important part of research on second language learning, but most research on language proficiency typically has not included oral fluency as part of interaction, even though natural communication usually occurs in conversations. The present study considered aspects of turn-taking behavior as part of the construct of fluency and investigated whether these aspects differentially influence perceived fluency ratings of native and non-native speech. Results from two experiments using acoustically manipulated speech showed that, in native speech, too ‘eager’ (interrupting a question with a fast answer) and too ‘reluctant’ answers (answering slowly after a long turn gap) negatively affected fluency ratings. However, in non-native speech, only too ‘reluctant’ answers led to lower fluency ratings. Thus, we demonstrate that acoustic properties of dialogue are perceived as part of fluency. By adding to our current understanding of dialogue fluency, these lab-based findings carry implications for language teaching and assessment.

    Additional information

    data + R analysis script via osf
  • Van Wijk, C., & Kempen, G. (1980). Functiewoorden: Een inventarisatie voor het Nederlands [Function words: An inventory for Dutch]. ITL: Review of Applied Linguistics, 53-68.
